
Research on Motion Transfer Method from Human Arm to Bionic Robot Arm Based on PSO-RF Algorithm

1 School of Mechanical and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
2 College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
3 College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
4 Zhejiang LINIX Motor Co., Ltd., Dongyang 322118, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 392; https://doi.org/10.3390/biomimetics10060392
Submission received: 20 April 2025 / Revised: 1 June 2025 / Accepted: 9 June 2025 / Published: 11 June 2025

Abstract

Existing motion transfer methods for bionic robot arms are typically based on kinematic equivalence or simplified dynamic models, and they frequently fail to deliver dynamic compliance and real-time adaptability in complex human-like motions. To address this shortcoming, this study presents a motion transfer method from the human arm to a bionic robot arm based on the hybrid PSO-RF (Particle Swarm Optimization-Random Forest) algorithm, which improves joint space mapping accuracy and dynamic compliance. First, a high-precision optical motion capture (Mocap) system was used to record human arm trajectories, and Kalman filtering and a Rauch–Tung–Striebel (RTS) smoother were applied to reduce noise and phase lag. The joint angles of the human arm were then computed through geometric vector analysis. Although geometric vector analysis offers an initial estimate of the joint angles, its deterministic framework accumulates error when reflective markers are occluded or the arm passes through kinematic singularities. To overcome this limitation, five action sequences were designed to build the training database for the PSO-RF model, which predicts joint angles across different actions. Finally, an experimental platform was built to validate the motion transfer method; the experiments showed that the system attains high prediction accuracy (R2 = 0.932 for the elbow joint angle) and real-time performance with a latency of 0.1097 s. This work advances compliant human–robot interaction by addressing joint-level dynamic transfer challenges, presenting a framework for applications in intelligent manufacturing and rehabilitation robotics.

1. Introduction

As technology progresses, bionic robots have developed rapidly, demonstrating great potential in fields such as intelligent manufacturing, field operation, and medical rehabilitation [1,2,3]. Human arm motion is complex, combining multi-joint flexion and extension, abduction and adduction, and other coordinated movements [4]. When a bionic robot arm performs tasks, it therefore needs to reproduce dynamic behaviors similar to those of the human body, and traditional robotic arm trajectory planning methods struggle to adapt to such highly dynamic, complex motions [5]. It is thus necessary to analyze human motion, extract the motion characteristics of the human arm across different movements, and apply them to the trajectory planning and control algorithms of a bionic robot arm.
Motion capture (Mocap) systems are employed to acquire kinematic trajectories of human body motions to analyze or reproduce complex motion trajectories [6,7]. Human Mocap can be categorized into the following four specialized systems tailored to distinct application scenarios: inertial Mocap systems [8], ultrasonic Mocap systems [9], vision-based Mocap systems [10], and optical Mocap systems [11,12].
Based on Mocap systems, Jia et al. [13] developed a robotic arm trajectory planning method leveraging the learning and generalization capabilities of Dynamic Movement Primitives (DMPs), demonstrating a 54.2% decrease in average trajectory deviation. Yu et al. [14] proposed a robotic skill-learning framework integrating motion and impedance features, utilizing electromyography (EMG) to estimate human upper-limb stiffness and employing DMPs for the dual-modal encoding of kinematic trajectories and impedance parameters. This method was validated in a KINOVA robotic water-pumping task, achieving a 96.2% success rate and a 37.5% reduction in peak human–robot interaction forces while limiting trajectory tracking errors to within ±1.8 mm. Vuga et al. [15] presented a real-time humanoid motion transfer system using low-cost RGB-D sensing, enabling coordinated improvements in motion imitation fidelity.
Existing studies predominantly focus on end-effector trajectory transfer while neglecting the dynamic transfer of joint angles. The continuity of human joint motion is crucial for achieving compliant and coordinated arm movements. To enhance the compliance of arm motion transfer, Zhao et al. [16] developed a human-like motion planner leveraging Human Arm Motion Patterns (HAMPs) for robotic handover tasks, implementing task decomposition, HAMP feature extraction, and duration-optimized primitive sequencing. Experimental validation on the KUKA IIWA robot demonstrates that the generated trajectories accurately transfer the joint coordination characteristics inherent in human arm movements. Current methodologies predominantly rely on manual parameter tuning, exhibit limited generalization capability, and suffer from a lack of interpretability in deep learning models, thereby compromising the robustness of bionic robotic arms in complex operational scenarios.
Diverging from prior research on motion transfer methods for bionic robot arms, this paper collects human arm motion data with a high-precision optical Mocap system, extracts joint angle features using the geometric vector method, applies a modified particle swarm optimization to tune the hyperparameters of a random forest model, establishes a mapping model from human arm motion to joint angles, and verifies the motion transfer method through prototype experiments.
The structure of this paper is organized as follows: Section 1 delineates the fundamental challenges in human-to-robot motion transfer through a comprehensive literature review, establishing theoretical foundations for bionic robot arm control. Section 2 describes the arrangement of the motion capture system and the trajectory processing based on Kalman filtering and an RTS smoother. Section 3 details the geometric vector analysis framework for human arm kinematics modeling and introduces the hybrid PSO-RF algorithm architecture, specifying the particle swarm optimization parameters and the random forest configuration strategy. Section 4 constructs the experimental platform and verifies the proposed PSO-RF algorithm on the prototype of the bionic robot arm; the superiority of the algorithm is verified through comparison with existing methods. Section 5 critically discusses the advantages and disadvantages of the research work. Section 6 presents the conclusions of the article.

2. Motion Capture Solution

2.1. Arrangement of the Mocap System

To achieve real-time capture of complex human arm motions, this paper employs the Nokov Mocap System [17], configured with twelve infrared Mars-12H cameras (100 Hz frame rate, ±0.08 mm accuracy), a host computer, and a set of reflective markers, as shown in Figure 1. The proprietary XINGYING 3.2 software suite provides 3D coordinate trajectories of the reflective markers, with end-to-end processing latency kept within 3 ms.

2.2. Motion Capture Recognition

The optical Mocap system relies on reflective markers to capture the motion of objects. By suitably arranging reflective markers on the human arm, the rotation angles of the arm joints can be calculated with the geometric vector method. Reflective markers are placed at the hip, shoulder, elbow, and wrist joints and on the palm. The hip joint marker serves as the coordinate origin of the human arm motion model. The marker arrangement is shown in Figure 2a.
Under proper lighting conditions, with potential hazards eliminated and the safety of the experimental environment ensured, the motion capture experiment was set up to simulate the human action of pouring water. The single-group motion flow is illustrated in Figure 2b, and the specific experimental steps are as follows:
(i)
Vertically place the hand beside the cup.
(ii)
Extend the thumb, grip the cup, and lift it to a height of one palm.
(iii)
Tilt the wrist 90 degrees and maintain this position for 2 s, then rotate the wrist 90 degrees in the opposite direction to return it to the neutral position and lower the hand.
(iv)
Straighten the palm and return to the initial position.
When the same motion is collected repeatedly, the participant should maintain a consistent execution state to enhance data stability.

2.3. Coordinate Trajectory Processing

The trajectories of the reflective markers acquired during human arm motion exhibit high-frequency noise due to intrinsic physiological tremor and extrinsic vibrations from environmental interactions. Because the noise is approximately Gaussian and the trajectories obey linear kinematic constraints, the process can be treated as a linear Gaussian (LG) system. The discrete-time state-space model [18] can be described as follows:
$$x_k = F_{k|k-1} x_{k-1} + w_k, \quad w_k \sim \mathcal{N}(0, Q_k)$$
$$z_k = H_k x_k + v_k, \quad v_k \sim \mathcal{N}(0, R_k)$$
where $x_k$ is the state vector, $z_k$ is the measured marker trajectory, $F_{k|k-1}$ is the state transition matrix, $H_k$ is the observation matrix, $Q_k$ is the process noise covariance, $R_k$ is the measurement noise covariance, $w_k$ is the process noise, and $v_k$ is the measurement noise.
According to Kalman filtering [19], the iteration of the model from Equation (1) can be described as follows:
(i)
Prediction Step:
$$\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1}$$
$$P_{k|k-1} = F_k P_{k-1|k-1} F_k^\top + Q_k$$
(ii)
Update Step:
$$K_k = P_{k|k-1} H_k^\top \left( H_k P_{k|k-1} H_k^\top + R_k \right)^{-1}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - H_k \hat{x}_{k|k-1} \right)$$
$$P_{k|k} = \left( I - K_k H_k \right) P_{k|k-1}$$
The shoulder joint coordinates obtained in the experiment of Figure 2b are processed by Kalman filtering, and the obtained comparison image is shown in Figure 3.
The sampling frequency of the Nokov Mocap system is 100 Hz, so every 100 frames correspond to 1 s. After filtering the raw data, the estimated result still deviates from the expected one, and the Kalman-filtered trajectories show a slight phase lag. A Rauch–Tung–Striebel (RTS) smoother [20] was therefore employed to further reduce fluctuations in the estimate while preserving phase consistency.
Defining $C_k$ as the smoothing gain matrix, the gain can be calculated as follows:
$$C_k = P_{k|k} F_{k+1}^\top \left( P_{k+1|k} \right)^{-1}$$
Update the smooth state as follows:
$$\hat{x}_{k|N} = \hat{x}_{k|k} + C_k \left( \hat{x}_{k+1|N} - \hat{x}_{k+1|k} \right)$$
$$P_{k|N} = P_{k|k} + C_k \left( P_{k+1|N} - P_{k+1|k} \right) C_k^\top$$
where N is the total number of trajectory sampling points.
After employing an RTS smoother, the obtained comparison image is shown in Figure 4.
As shown in Figure 4, the smoothed data maintains good phase consistency with the original data while eliminating the original noise.
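For readers implementing this pipeline, the sketch below is a minimal NumPy implementation of the forward Kalman pass (Equations (2) and (3)) and the backward RTS pass (Equations (4) and (5)) for a single marker axis, assuming a constant-velocity state model; the noise levels q and r are illustrative placeholders, not the tuned values used in this study.

import numpy as np

def kalman_rts(z, dt=0.01, q=1e-3, r=1e-2):
    # Constant-velocity Kalman filter + RTS smoother for one marker axis.
    # z: (N,) measured positions sampled at 100 Hz (dt = 0.01 s).
    # q, r: illustrative process/measurement noise levels (assumptions).
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                       # only position is observed
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])           # white-acceleration process noise
    R = np.array([[r]])
    N = len(z)
    xf = np.zeros((N, 2)); Pf = np.zeros((N, 2, 2))  # filtered x_hat_{k|k}, P_{k|k}
    xp = np.zeros((N, 2)); Pp = np.zeros((N, 2, 2))  # predicted x_hat_{k|k-1}, P_{k|k-1}
    x = np.array([z[0], 0.0]); P = np.eye(2)
    for k in range(N):                               # forward pass, Equations (2) and (3)
        x_pr = F @ x; P_pr = F @ P @ F.T + Q                  # prediction step
        K = P_pr @ H.T @ np.linalg.inv(H @ P_pr @ H.T + R)    # Kalman gain
        x = x_pr + K @ (np.array([z[k]]) - H @ x_pr)          # update step
        P = (np.eye(2) - K @ H) @ P_pr
        xf[k], Pf[k], xp[k], Pp[k] = x, P, x_pr, P_pr
    xs = xf.copy(); Ps = Pf.copy()
    for k in range(N - 2, -1, -1):                   # backward pass, Equations (4) and (5)
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])   # smoothing gain C_k
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs[:, 0]                                  # smoothed position track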

3. Methodology

3.1. Calculation of the Human Arm Model Joint Angles

Through the motion capture system, the 3D coordinates of reflective markers can be obtained when the subject performs different tasks. To achieve motion transfer between the arm and the robotic arm, it is necessary to acquire the joint angle data of the arm based on the information of the reflective markers.
The human arm model is established as shown in Figure 5. The coordinate system $O_h X_h Y_h Z_h$ takes the right hip joint $O_h$ as its origin, where $J_S$ is the shoulder joint, $J_E$ is the elbow joint, $J_W$ is the wrist joint, and $J_1$, $J_2$, and $J_3$ are the reflective markers on the palm.
$\gamma_1$ is the angle between plane $X_h O_h Y_h$ and plane $J_E' O_h J_E$, representing the flexion and extension of the shoulder joint, which can be expressed as follows:
$$\gamma_1 = \left\langle \overrightarrow{O_h X_h} \times \overrightarrow{O_h Y_h},\; \overrightarrow{O_h J_E'} \times \overrightarrow{O_h J_E} \right\rangle$$
where point $J_E'$ is the projection of $J_E$ onto plane $X_h O_h Z_h$. Likewise, construct a line through point $J_S$, parallel to line $O_h X_h$, intersecting plane $X_h O_h Z_h$ at point $J_S'$.
$\gamma_2$ is the angle between line $J_S J_E$ and line $J_S J_S'$, representing the adduction and abduction of the shoulder joint:
$$\gamma_2 = \left\langle \overrightarrow{J_S J_E},\; \overrightarrow{J_S J_S'} \right\rangle$$
$\gamma_3$ is the angle between plane $J_S J_E J_W$ and plane $X_h O_h Y_h$, representing the internal and external rotation of the shoulder joint:
$$\gamma_3 = \left\langle \overrightarrow{J_S J_E} \times \overrightarrow{J_E J_W},\; \overrightarrow{O_h Z_h} \right\rangle$$
$\gamma_4$ is the angle between line $J_S J_E$ and line $J_E J_W$, representing the flexion and extension of the elbow joint:
$$\gamma_4 = \left\langle \overrightarrow{J_S J_E},\; \overrightarrow{J_E J_W} \right\rangle$$
$\gamma_5$ is the angle between plane $J_E J_W J_1$ and plane $X_h O_h Y_h$, representing the pronation and supination of the forearm:
$$\gamma_5 = \left\langle \overrightarrow{J_E J_W} \times \overrightarrow{J_W J_1},\; \overrightarrow{O_h Z_h} \right\rangle$$
$\gamma_6$ is the angle between plane $J_W J_1 J_3$ and line $J_E J_W$, representing the dorsiflexion and palmar flexion of the wrist joint:
$$\gamma_6 = \left\langle \overrightarrow{J_W J_1} \times \overrightarrow{J_W J_3},\; \overrightarrow{J_E J_W} \right\rangle$$
$\gamma_7$ is the angle between line $J_W J_2$ and line $J_E J_W$, representing the radial and ulnar deviation of the wrist joint:
$$\gamma_7 = \left\langle \overrightarrow{J_W J_2},\; \overrightarrow{J_E J_W} \right\rangle$$
The rotation angles of the seven joints of the human arm can be calculated with Equations (6)–(12). To obtain accurate results, the participant should avoid abrupt changes during arm motion to prevent data loss.
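As a concrete illustration of how Equations (6)–(12) are evaluated from marker coordinates, the sketch below computes the elbow flexion angle $\gamma_4$ and a plane-to-line angle of the $\gamma_6$ type; the marker positions are hypothetical values for demonstration only.

import numpy as np

def angle_between(u, v):
    # Angle (degrees) between two 3D vectors, clipped for numerical safety.
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical marker coordinates (mm) in the hip-centered frame O_h X_h Y_h Z_h
J_S = np.array([0.0, 150.0, 450.0])      # shoulder joint marker
J_E = np.array([20.0, 160.0, 170.0])     # elbow joint marker
J_W = np.array([230.0, 150.0, 130.0])    # wrist joint marker

# Equation (9): gamma_4 is the angle between vectors J_S->J_E and J_E->J_W
gamma_4 = angle_between(J_E - J_S, J_W - J_E)

# Plane-to-line angles such as Equation (11) use a cross product as the plane normal:
J_1 = np.array([300.0, 150.0, 120.0])    # palm marker (hypothetical)
J_3 = np.array([290.0, 170.0, 110.0])    # palm marker (hypothetical)
normal = np.cross(J_1 - J_W, J_3 - J_W)  # normal of plane J_W J_1 J_3
gamma_6 = angle_between(normal, J_W - J_E)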
While geometric vector analysis offers a deterministic framework for calculating joint angles based on spatial coordinates, its reliance on idealized assumptions limits its applicability in real-world scenarios. Specifically, incomplete sensor data, noise interference, and singular configurations in complex human motions, such as extreme joint rotations or overlapping limb trajectories, can degrade the accuracy of purely geometric solutions [21,22]; hence, this paper employs a joint angle prediction model to address these challenges.

3.2. PSO-RF Algorithm

To predict the joint angles of the human arm in different motions, this paper defines five motion sets and analyzes the rotations of the shoulder, elbow, and wrist joints of the human arm, as shown in Table 1.
Before the formal start of the experiment, the participant should be acquainted with the entire experimental procedure and safety precautions. The participant then executed the experimental motions within the Nokov Mocap system, completing each preset motion within 10 s.
The joint angle prediction model is a regression model that takes human arm trajectories as input and predicts the joint angles during complex motions. While random forest (RF) has inherent advantages in handling nonlinear biomechanical relationships through ensemble learning, manual hyperparameter tuning is inadequate because of its subjective dependence on empirical settings and its suboptimal generalization under motion variability. To address these limitations, this study develops a hybrid PSO-RF architecture [23,24] that leverages the global search capability of particle swarm optimization (PSO) to optimize the hyperparameters of the RF algorithm. The PSO-RF training architecture is shown in Figure 6.
  • Database establishment:
Kinematic data were collected using the Nokov Mocap system under controlled laboratory conditions. Reflective markers were attached to the participant in the configuration shown in Figure 5 to track the 3D trajectories during the predefined actions listed in Table 1. The participant performed standardized motions; each action in the five action sequences in Table 1 was repeated 50 times, with each single action completed within 10 s. In total, 250 sets of training data were obtained, with 75% used for training and 25% for validation.
  • Initialization:
Define $n_{\text{trees}}$, $d_{\max}$, and $m_{\text{features}}$ as the number of trees, maximum depth, and feature sampling ratio of the RF model, respectively; $X_i \in \mathbb{R}^{7 \times 3} = [O_h, J_S, J_E, J_W, J_1, J_2, J_3]^\top$ as the input of the PSO-RF model; and $\Theta_i \in \mathbb{R}^{7} = [\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7]^\top$ as the output of the PSO-RF model. The training dataset can therefore be written as $S = \{(X_i, \Theta_i)\}$.
The hyperparameters of the RF model are defined as the following initial particle positions:
$$\vartheta_{\text{PSO}} = \left[ n_{\text{trees}}, d_{\max}, m_{\text{features}} \right]^\top$$
According to the input and output characteristics, the constraint ranges are specified as follows:
$$n_{\text{trees}} \in [50, 200], \quad d_{\max} \in [5, 30], \quad m_{\text{features}} \in [0.2, 0.8]$$
  • Validation:
To evaluate the generalization performance of the PSO-RF model, the dataset is partitioned using hold-out validation [25]. Specifically, 75% of the experimental data (denoted $S_{\text{train}} = \{(X_i^{\text{train}}, \Theta_i^{\text{train}})\}$) is randomly selected for model training, while the remaining 25% (denoted $S_{\text{valid}} = \{(X_i^{\text{valid}}, \Theta_i^{\text{valid}})\}$) is reserved for validation.
Given the inherent complexity of human arm motion transfer, it is critical to prioritize explained variance and generalization in the trajectory mapping framework. To address these requirements, R squared (R2) is adopted as the optimization objective, and the fitness function is constructed as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (\gamma_i - \hat{\gamma}_i)^2}{\sum_{i=1}^{n} (\gamma_i - \bar{\gamma})^2}, \qquad F(\vartheta_{\text{PSO}}) = R^2(\Theta_i, \hat{\Theta}_i)$$
where $\gamma_i$ are the measured angles, $\hat{\gamma}_i$ are the predicted angles, and $\bar{\gamma}$ is the mean of the measured angles.
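Under the assumption that the RF component follows the standard scikit-learn interface, a minimal sketch of the hold-out split and the R2 fitness evaluation of a single particle might look as follows; the arrays here are random placeholders standing in for the 250 recorded motion sets.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# X: (250, 21) flattened marker coordinates (7 markers x 3 axes),
# Theta: (250, 7) joint angles. Shapes follow the paper's dataset;
# the values are random placeholders.
X = np.random.rand(250, 21)
Theta = np.random.rand(250, 7)

X_tr, X_va, T_tr, T_va = train_test_split(X, Theta, train_size=0.75, random_state=0)

def fitness(n_trees, d_max, m_features):
    # R2 fitness of one PSO particle, i.e. F(theta_PSO) in Equation (15).
    rf = RandomForestRegressor(n_estimators=int(n_trees), max_depth=int(d_max),
                               max_features=m_features, random_state=0)
    rf.fit(X_tr, T_tr)
    return r2_score(T_va, rf.predict(X_va))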
  • Update:
For particle $i$ at iteration $t$, the velocity and position updates are as follows:
$$v_i(t+1) = \omega v_i(t) + c_1 r_1 \left[ p_i^{\text{best}} - \vartheta_i(t) \right] + c_2 r_2 \left[ g^{\text{best}} - \vartheta_i(t) \right]$$
$$\vartheta_i(t+1) = \vartheta_i(t) + v_i(t+1)$$
where $\omega$ is the inertia weight; $v_i$ is the particle velocity; $c_1$ is the cognitive acceleration coefficient; $c_2$ is the social acceleration coefficient; $r_1, r_2 \in [0, 1]$ are random exploration factors; $p_i^{\text{best}}$ is the personal best position; and $g^{\text{best}}$ is the global best position.
In this system, the convergence threshold $e$ was set to 0.005, the maximum iteration count $T_{\max}$ to 150, the acceleration coefficients to $c_1 = c_2 = 2.05$, and the inertia weight $\omega$ to 0.7.
After the above analysis, the pseudocode of the PSO-RF algorithm in this paper is shown in Algorithm 1.
Algorithm 1. The pseudocode of the PSO-RF algorithm.
Pseudocode: PSO-RF algorithm for joint angle prediction (Python)
# Particle initialization
NP = 50; t_max = 150; e = 0.005; w = 0.7; c1 = c2 = 2.05        # PSO parameters
rf_para = {'n_trees': [50, 200], 'd_max': [5, 30], 'm_features': [0.2, 0.8]}  # RF bounds
particles = [{'pos': [randint(*rf_para['n_trees']), randint(*rf_para['d_max']),
                      uniform(*rf_para['m_features'])],
              'vel': [0, 0, 0], 'pbest': [], 'pfit': -inf} for _ in range(NP)]
gbest = {'pos': [], 'fit': -inf}; prev_fit = -inf
X_train, X_val, y_train, y_val = split(S, 0.75)                 # hold-out split
# Swarm optimization
for t in range(t_max):
    for p in particles:
        rf = RandomForest(n_trees=p['pos'][0], d_max=p['pos'][1], m_features=p['pos'][2])
        rf.fit(X_train, y_train)                                # train RF model
        y_pred = rf.predict(X_val)
        R2 = 1 - sum((y_val - y_pred)**2) / sum((y_val - mean(y_val))**2)   # R2 fitness
        if R2 > p['pfit']: p['pfit'], p['pbest'] = R2, list(p['pos'])
        if R2 > gbest['fit']: gbest['fit'], gbest['pos'] = R2, list(p['pos'])  # update bests
    # Update particle dynamics
    for p in particles:
        for i in range(3):
            r1, r2 = random(), random()
            p['vel'][i] = (w*p['vel'][i] + c1*r1*(p['pbest'][i] - p['pos'][i])
                           + c2*r2*(gbest['pos'][i] - p['pos'][i]))
            p['pos'][i] = clamp(p['pos'][i] + p['vel'][i], list(rf_para.values())[i])
    if abs(gbest['fit'] - prev_fit) < e: break                  # convergence check
    prev_fit = gbest['fit']
return gbest['pos']
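Once the swarm converges, the returned hyperparameters are used to refit a final model on the training split. A short usage sketch is shown below, again assuming a scikit-learn style RF; the optimum values and data arrays are placeholders, not the paper's results.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_train = np.random.rand(187, 21)           # placeholder for the 75% training split
y_train = np.random.rand(187, 7)            # placeholder joint-angle targets

n_trees, d_max, m_features = 120, 18, 0.5   # placeholder PSO optimum
rf = RandomForestRegressor(n_estimators=n_trees, max_depth=d_max,
                           max_features=m_features).fit(X_train, y_train)

frame = np.random.rand(1, 21)               # one frame: 7 markers x 3 coordinates
gamma_pred = rf.predict(frame)[0]           # predicted angles gamma_1 ... gamma_7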

3.3. Analysis of Joint Angles Prediction Results

The drink water action set in Table 1 was used as the test input for the trained PSO-RF model to predict joint angles, and the predicted angles and the actual observed angles are shown in Figure 7.
The trends of the two curves are almost identical. Angle 4 represents the rotation angle of the elbow joint; its motion is relatively simple, so it achieves the best prediction performance. The remaining angles also achieve good prediction results. The effect analysis of the seven predicted angles is shown in Figure 8.

4. Experiment

4.1. Joint Mapping Analysis

Human arm motions are completed by the shoulder, elbow, and wrist joints, which have different degrees of freedom and work together to operate complex motions. The shoulder joint has three degrees of freedom, the elbow joint has one degree of freedom, and the wrist joint has three degrees of freedom [26,27]. To realize the motion transfer between the human arm and the bionic robot arm, it is necessary to conduct a degree-of-freedom analysis of the bionic robot arm.
The bionic robot arm used in the motion transfer experiment has been designed to mimic the shoulder, elbow, and wrist joints. The 3D model and structural diagram of the bionic robot arm are shown in Figure 9.
In previous work [28], we solved the kinematics of the hybrid bionic robot arm, which can be simplified into a highly humanoid arm structure, as shown in Figure 10.
The bionic robot arm has a redundant degree of freedom (DOF) at the forearm, which duplicates the radial/ulnar deviation of the wrist joint. After removing this redundant DOF, its degrees of freedom correspond exactly to those of the human arm. Through kinematic and DOF analysis, motion transfer between the bionic robot arm and the human arm in joint space can be realized.

4.2. Arm–Bionic Robot Arm Motion Transfer Experiment

The experimental platform is shown in Figure 11. The participant wore reflective markers arranged as in Figure 2a and Figure 5 and performed the motions listed in Table 1. The Nokov Mocap system captured the trajectory signals and transmitted them to the XINGYING 3.2 software on the host Windows computer. The filtered trajectory signals were processed by the PSO-RF model to generate predicted joint angle profiles. These profiles were converted into control commands for the Robot Operating System (ROS), which orchestrated the real-time actuation of the bionic robot arm, completing the human arm motion transfer experiment.
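As a sketch of the last stage of this pipeline, the following minimal rospy node publishes the predicted angles as joint commands. The topic name /bionic_arm/joint_command, the JointState message choice, and the get_predicted_angles() helper are illustrative assumptions, since the paper does not specify the control interface.

import rospy
from sensor_msgs.msg import JointState

rospy.init_node('pso_rf_motion_transfer')
pub = rospy.Publisher('/bionic_arm/joint_command', JointState, queue_size=10)  # assumed topic
rate = rospy.Rate(100)                       # match the 100 Hz Mocap stream

while not rospy.is_shutdown():
    angles = get_predicted_angles()          # hypothetical helper: 7 PSO-RF outputs (rad)
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ['gamma_%d' % i for i in range(1, 8)]
    msg.position = list(angles)
    pub.publish(msg)                         # stream one command per Mocap frame
    rate.sleep()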
To verify the effect of human arm motion transfer, we randomly arranged and combined the sub-motions of the five motions in Table 1 to obtain a random experimental motion, as shown in Figure 12.
The sampling frequency of the Nokov Mocap system is 100 Hz, so every 100 frames represent 1 s. During the 10 s motion execution, the central 8 s interval (1–9 s) was analyzed to compare the 3D positional trajectories of the human arm and the bionic robot arm, focusing on the elbow and wrist joints, as shown in Figure 13.
To quantitatively evaluate the temporal–spatial consistency between the human arm and the bionic robot arm trajectories, mean-centered dynamic time warping (mean-centered DTW) was employed as a similarity metric. Unlike traditional Euclidean distance measures, mean-centered DTW is robust to temporal misalignments and phase shifts by nonlinearly warping the time axis to minimize the cumulative distance between two sequences [29,30].
Define $X = \{x_1, x_2, \ldots, x_{800}\}$ as the z-direction trajectory of the elbow joint of the human arm and $Y = \{y_1, y_2, \ldots, y_{800}\}$ as the z-direction trajectory of the elbow joint of the bionic robot arm. Each trajectory is then centered by subtracting its own mean to eliminate the overall offset:
$$\tilde{x}_i = x_i - \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad \tilde{y}_j = y_j - \frac{1}{N} \sum_{j=1}^{N} y_j$$
The mean-centered DTW distance $d_{\text{Centered-DTW}}$ is computed as follows:
$$d_{\text{Centered-DTW}}(X, Y) = \min_{W} \sum_{(i,j) \in W} d(\tilde{x}_i, \tilde{y}_j)$$
where $W$ represents the optimal warping path satisfying boundary, monotonicity, and continuity constraints, and $d(\tilde{x}_i, \tilde{y}_j)$ is the Euclidean distance [31].
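A minimal NumPy sketch of the mean-centered DTW distance of Equations (17) and (18) for two 1D trajectories is given below; it uses the classic O(NM) dynamic-programming recursion rather than any accelerated variant.

import numpy as np

def centered_dtw(x, y):
    # Mean-centered DTW: subtract each series' mean (Equation (17)),
    # then minimize the cumulative distance over warping paths (Equation (18)).
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    N, M = len(x), len(y)
    D = np.full((N + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            cost = abs(x[i - 1] - y[j - 1])        # Euclidean distance in 1D
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[N, M]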
The system latency is computed by analyzing the temporal offsets between corresponding peaks in the human arm trajectory $X = \{x_1, x_2, \ldots, x_{800}\}$ and the bionic robot arm trajectory $Y = \{y_1, y_2, \ldots, y_{800}\}$. Peak detection [32] was employed to identify the timestamps of all peaks in these trajectories:
$$\hat{X} = \{\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N\} \;\; \text{(human arm peaks)}, \qquad \hat{Y} = \{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_M\} \;\; \text{(bionic robot arm peaks)}$$
If $N \neq M$, truncate the longer sequence to align the datasets, ensuring a one-to-one correspondence between human and bionic robot arm peaks. For each matched peak pair $(\hat{x}_i, \hat{y}_i)$, compute the temporal difference $\Delta t_i$:
$$\Delta t_i = t_{\hat{x}_i} - t_{\hat{y}_i}$$
The mean latency across all matched peaks is then calculated as:
$$\Delta t_{\text{avg}} = \frac{1}{n} \sum_{i=1}^{n} \Delta t_i$$
where $n$ is the number of matched peak pairs and $\Delta t_{\text{avg}}$ is the resulting system latency.
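A corresponding latency sketch using SciPy's peak detection, following Equations (20) and (21), is shown below; the find_peaks defaults are illustrative, and real trajectories may need prominence or minimum-distance thresholds.

import numpy as np
from scipy.signal import find_peaks

def mean_latency(x, y, fs=100.0):
    # Mean peak-to-peak latency (s) between human trajectory x and robot
    # trajectory y, both sampled at fs Hz (100 Hz for the Mocap stream).
    px, _ = find_peaks(x)              # peak indices in the human trajectory
    py, _ = find_peaks(y)              # peak indices in the robot trajectory
    n = min(len(px), len(py))          # truncate the longer peak sequence
    dt = (py[:n] - px[:n]) / fs        # per-peak temporal offsets, Equation (20)
    return float(np.mean(np.abs(dt)))  # mean latency, Equation (21)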
To verify the advantages of the proposed PSO-RF algorithm, experiments were conducted using the PSO-RF algorithm, the long short-term memory (LSTM) algorithm [33], the genetic algorithm [34], and the geometric vector method. Figure 14 compares the mean-centered DTW values and system latencies of the four methods for the z-direction trajectories of the elbow joint of the human arm and the bionic robot arm.
As shown in Figure 13 and Figure 14, the proposed PSO-RF algorithm demonstrates high trajectory similarity between the human arm and the bionic robot arm. Quantitative analysis reveals a mean-centered DTW distance of 4.2691 mm for the elbow joint trajectories along the z-axis, with a deviation under 0.01. This significantly outperforms the comparative methods: the geometric vector method (16.0067 mm), LSTM (6.9326 mm), and the genetic algorithm (5.7862 mm).
While the PSO-RF algorithm exhibits a marginally higher system latency (0.1097 s) than the geometric vector method (0.0815 s), it achieves a superior balance between temporal responsiveness and spatial accuracy. The observed latency remains lower than both the LSTM (0.1186 s) and genetic algorithm (0.2103 s) implementations. These results confirm that the PSO-RF algorithm effectively reconciles the precision-efficiency trade-off, delivering the best comprehensive performance in motion transfer applications.

5. Discussion

The proposed PSO-RF hybrid algorithm demonstrates significant improvements in joint space mapping accuracy and dynamic compliance for a bionic robot arm. Experimental results indicate that the model achieves high-fidelity prediction of elbow joint angles (R2 = 0.932), surpassing traditional kinematic equivalence methods. The integration of PSO enables adaptive parameter tuning of the random forest model, effectively addressing error accumulation caused by marker occlusion in geometric vector analysis. Furthermore, the low operational latency (0.1097 s) ensures real-time adaptability, which is critical for human–robot collaboration tasks such as object handover or assembly.
The experiments were conducted under controlled laboratory conditions. Although the Kalman filter and RTS smoother reduced noise and phase lag in the motion capture data, the current validation focuses on predefined arm motions, so complex multi-joint coordination motions were not fully explored, potentially limiting the generalization of the method. Additionally, the training database was established with a single participant, which yields homogeneous physical parameters. Future research should therefore expand the motion types and the participant cohort to increase the generalization capability of the proposed PSO-RF model.
Despite differing joint parameters, the human arm and the bionic robot arm share comparable DOF configurations, enabling analogous motions through joint space motion transfer. The motion transfer system still exhibited persistent jitter during motion, which may be attributed to improper threshold settings in the filter implementation or to inter-joint data interference. Future research should therefore focus on improving motion compliance through systematic parameter tuning and noise suppression strategies.

6. Conclusions

This paper developed a motion transfer framework for bionic robot arms by integrating the Nokov Mocap system, geometric vector-based joint angle calculation, and a hybrid PSO-optimized random forest model, achieving high-fidelity joint space mapping and motion transfer from a human arm to a bionic robot arm. The PSO-RF algorithm achieved precise joint angle prediction (R2 = 0.932 for the elbow joint angle). A corresponding experimental platform was then built to verify the proposed motion transfer method under various complex human arm motions. The bionic robot arm reproduced trajectories similar to those of the human arm with a low operational latency of 0.1097 s, enabling high-precision motion transfer between the human arm and the bionic robot arm. The superiority of the algorithm was verified through comparison with existing methods. Future work will focus on further improving the motion transfer adaptability of the bionic robot arm during complex human arm motions by expanding the motion dataset and integrating impedance control. Ultimately, this method provides a scalable foundation for biomimetic robotics, with potential applications in intelligent manufacturing and medical rehabilitation.

Author Contributions

Conceptualization, Y.Z. and P.S.; methodology, H.Z. and G.Z.; software, H.Z. and Y.H.; validation, G.Z. and Y.Z.; formal analysis, P.S., Y.Z. and H.Z.; investigation, Z.W.; resources, P.S. and Y.Z.; data curation, H.Z. and G.Z.; writing—original draft preparation, H.Z., G.Z., and Z.W.; writing—review and editing, Y.Z. and P.S.; visualization, Y.Z. and Y.H.; supervision, P.S.; project administration, Y.Z. and P.S.; funding acquisition, Y.Z. and P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Zhejiang Province (Grant No. LTGY24E050002, LQ23E010003) and the National Natural Science Foundation of China (Grant Nos. 52475034, U21A20122, 52105037).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Zhonghua Wei was employed by the company Zhejiang LINIX Motor Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DMPs: Dynamic Movement Primitives
EMG: Electromyography
HAMPs: Human Arm Motion Patterns
Mocap: Motion Capture
RTS: Rauch–Tung–Striebel
RF: Random Forest
PSO: Particle Swarm Optimization
DTW: Dynamic Time Warping

References

1. Ortiz-Catalan, M.; Zbinden, J.; Millenaar, J.; D'Accolti, D.; Controzzi, M.; Clemente, F.; Cappello, L.; Earley, E.J.; Mastinu, E.; Kolankowska, J.; et al. A highly integrated bionic hand with neural control and feedback for use in daily life. Sci. Robot. 2023, 8, eadf7360.
2. Toedtheide, A.; Fortunic, E.P.; Kühn, J.; Jensen, E.; Haddadin, S. A transhumeral prosthesis with an artificial neuromuscular system: Sim2real-guided design, modeling, and control. Int. J. Robot. Res. 2024, 43, 942–980.
3. Licardo, J.T.; Domjan, M.; Orehovacki, T. Intelligent Robotics-A Systematic Review of Emerging Technologies and Trends. Electronics 2024, 13, 542–553.
4. Blandine, C.G. Anatomie pour le Mouvement, Tome 1: Introduction à l'Analyse des Techniques Corporelles; Gap: Paris, France, 2005; pp. 50–80.
5. Niu, H.; Zhao, X.; Jin, H.Z.; Zhang, X.L. A Whole-Body Coordinated Motion Control Method for Highly Redundant Degrees of Freedom Mobile Humanoid Robots. Biomimetics 2024, 9, 766.
6. Hyun-Sook, C. Design & Implementation of a Motion Capture Database Based on Motion Ontologies. J. Korea Multimed. Soc. 2005, 8, 618–632.
7. Kim, M.S. A Study on Overcome of Marker-based Motion Capture Environment. J. Korea Entertain. Ind. Assoc. 2016, 10, 17–25.
8. Ashhar, K.; Khyam, M.O.; Soh, C.B.; Kong, K.H. A Doppler-tolerant ultrasonic multiple access localization system for human gait analysis. Sensors 2018, 18, 2447.
9. Laurijssen, D.; Truijen, S.; Saeys, W.; Daems, W.; Steckel, J. An Ultrasonic Six Degrees-of-Freedom Pose Estimation Sensor. IEEE Sens. J. 2017, 17, 151–159.
10. Baak, A.; Müller, M.; Bharaj, G.; Seidel, H.P.; Theobalt, C. A data-driven approach for real-time full body pose reconstruction from a depth camera. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; Volume 96, pp. 71–98.
11. Dobrian, C.; Bevilacqua, F. Gestural Control of Music Using the Vicon 8 Motion Capture System. In Proceedings of the 2003 Conference on New Interfaces for Musical Expression, Montréal, QC, Canada, 22–24 May 2003; Volume 43, pp. 160–163.
12. Wu, X.L.; Tang, Q.R.; Wang, F.; Guo, R.Q.; Zhu, Q.; Li, S.; Tu, D.; Liu, Q. A Robot-Assisted System for Dental Implantation. In Proceedings of the International Conference on Intelligent Robotics and Applications, Harbin, China, 1–3 August 2022; Springer: Cham, Switzerland, 2022; Volume 64, pp. 15–28.
13. Jia, X.H.; Zhao, B.; Liu, J.Y.; Zhang, S.L. A trajectory planning method for robotic arms based on improved dynamic motion primitives. Ind. Robot 2024, 51, 847–856.
14. Yu, X.B.; Liu, P.S.; He, W.; Liu, Y.; Chen, Q. Human-Robot Variable Impedance Skills Transfer Learning Based on Dynamic Movement Primitives. IEEE Robot. Autom. Lett. 2023, 7, 6463–6470.
15. Vuga, R.; Ogrinc, M.; Gams, A.; Petric, T.; Sugimoto, N.; Ude, A.; Morimoto, J. Motion Capture and Reinforcement Learning of Dynamically Stable Humanoid Movement Primitives. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013.
16. Zhao, J.; Wang, C.Y.; Xie, B.Y. Human-like motion planning of robotic arms based on human arm motion patterns. Robotica 2022, 41, 259–276.
17. Sheng, B.; Chen, L.F.; Cheng, J.; Zhang, Y.X.; Hua, Z.K.; Tao, J. A markless 3D human motion data acquisition method based on the binocular stereo vision and lightweight open pose algorithm. Measurement 2024, 225, 113908.
18. Roweis, S.; Ghahramani, Z. A unifying review of linear Gaussian models. Neural Comput. 1999, 11, 305–345.
19. Khodarahmi, M.; Maihami, V. A review on Kalman filter models. Arch. Comput. Methods Eng. 2023, 30, 727–747.
20. Sun, P.; Vasef, M.; Chen, L. Multi-vision-based displacement monitoring using global-local deep deblurring and Rauch-Tung-Striebel smoother. Measurement 2025, 242, 116292.
21. Zhang, T.Y.; Wang, H.G.; Lv, P.; Pan, X.A.; Wang, D.; Yuan, B.; Yu, H. Real-Time Motion Generation for Robot Manipulators in Complex Dynamic Environments. Adv. Intell. Syst. 2025, 2400738.
22. Roshan, T.R.; Jafari, M.; Golami, M.; Kazemi, M. Evaluating geometric measurement accuracy based on 3D model reconstruction of nursery tomato plants by Agisoft photoscan software. Comput. Electron. Agric. 2025, 221, 109000.
23. Daviran, M.; Maghsoudi, A.; Ghezelbash, R. Optimized AI-MPM: Application of PSO for tuning the hyperparameters of SVM and RF algorithms. Comput. Geosci. 2024, 195, 105785.
24. Shu, Y.R.; Kong, F.M.; He, Y.; Chen, L.H.; Liu, H.; Zan, F.; Lu, X.; Wu, T.; Si, D.; Mao, J.; et al. Machine learning-assisted source tracing in domestic-industrial wastewater: A fluorescence information-based approach. Water Res. 2024, 268, 122618.
25. Akesson, J.; Toger, J.; Heiberg, E. Random effects during training: Implications for deep learning-based medical image segmentation. Comput. Biol. Med. 2024, 180, 108944.
26. Billard, A.; Matarić, M.J. Learning human arm movements by imitation: Evaluation of a biologically inspired connectionist architecture. Robot. Auton. Syst. 2001, 37, 145–160.
27. Ahmed, M.H.; Kutsuzawa, K.; Hayashibe, M. Transhumeral arm reaching motion prediction through deep reinforcement learning-based synthetic motion cloning. Biomimetics 2023, 8, 367.
28. Sun, P.; Li, Y.B.; Wang, Z.S.; Chen, K.; Chen, B.; Zeng, X.; Zhao, J.; Yue, Y. Inverse displacement analysis of a novel hybrid humanoid robotic arm. Mech. Mach. Theory 2020, 147, 103743.
29. Sakoe, H.; Chiba, S. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 1978, 26, 43–49.
30. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523.
31. Székely, G.J.; Rizzo, M.L.; Bakirov, N.K. Measuring and testing dependence by correlation of distances. Ann. Stat. 2007, 35, 2769–2794.
32. Kharchenko, P.V.; Tolstorukov, M.Y.; Park, P.J. Design and analysis of ChIP-seq experiments for DNA-binding proteins. Nat. Biotechnol. 2008, 26, 1351–1359.
33. Band, S.S.; Lin, T.J.; Qasem, S.N.; Ameri, R.; Shahmirzadi, D.; Aslam, M.S.; Pai, H.T.; Salwana, E.; Mousavi, A. A deep reinforcement learning approach for wind speed forecasting. Eng. Appl. Comput. Fluid Mech. 2025, 19, 2498355.
34. Jafari, M.; Ehsani, M.; Hajikarimi, P.; Nejad, F.M. Nonlinear fractional viscoelastic modeling of high-temperature rheological behaviour of SBS and PPA modified asphalt binders. Int. J. Pavement Eng. 2025, 26, 2487614.
Figure 1. Global view of the Mocap system.
Figure 2. Capture points and capture actions: (a) reflective markers; (b) captured actions.
Figure 3. Before and after x-y-z position filtering of shoulder joints with the Kalman filter.
Figure 4. Before and after x-y-z position filtering of shoulder joints with the RTS smoother.
Figure 5. Schematic diagram of the joint angles of the human arm model.
Figure 6. PSO-RF structure of the joint angle prediction model.
Figure 7. Comparison diagram of the predicted and actual observed angles.
Figure 8. Prediction effect plots obtained for seven angles: (a) angle 1; (b) angle 2; (c) angle 3; (d) angle 4; (e) angle 5; (f) angle 6; and (g) angle 7.
Figure 9. Bionic robot arm: (a) 3D model; (b) structural diagram [28].
Figure 10. Joint mapping analysis of the bionic robot arm.
Figure 11. The experimental platform construction.
Figure 12. Experiment on motion transfer of the bionic robot arm: (i) initial state; (ii) take the cup; (iii) lift the cup; (iv) level the cup; (v) pour water; (vi) drink water.
Figure 13. Trajectory similarity comparison: (a) the elbow joint; and (b) the wrist joint.
Figure 14. Mean-centered DTW and system latency comparison.
Table 1. Correspondences between motion and joint actions.

Motion | Shoulder | Elbow | Wrist
Drink water | Flexion/Extension; Internal/External Rotation | Flexion/Extension | Radial deviation/Ulnar deviation; Dorsiflexion/Palmar flexion
Pour water | Flexion/Extension | Flexion/Extension | Radial deviation/Ulnar deviation; Pronation/Supination
Draw a circle | Flexion/Extension; Adduction/Abduction | Flexion/Extension | Radial deviation/Ulnar deviation; Dorsiflexion/Palmar flexion
Take the cup | Flexion/Extension | Flexion/Extension | Radial deviation/Ulnar deviation
Move the cup | Flexion/Extension; Adduction/Abduction | Flexion/Extension | Radial deviation/Ulnar deviation
