Article

Error Compensation for Delta Robot Based on Improved PSO-GA-BP Algorithm

1. School of Mechanical and Electrical Engineering, China University of Mining and Technology-Beijing, Beijing 100083, China
2. Jinan Eco-Environmental Monitoring Center of Shandong Province, Jinan 250013, China
3. Intelligent Mining and Robot Research Institute, China University of Mining and Technology-Beijing, Beijing 100083, China
4. Key Laboratory of Intelligent Mining and Robotics, Ministry of Emergency Management, Beijing 100083, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(13), 2118; https://doi.org/10.3390/math13132118
Submission received: 5 May 2025 / Revised: 19 May 2025 / Accepted: 26 June 2025 / Published: 28 June 2025

Abstract

To address the accuracy degradation of Delta robots caused by limited machining and assembly precision, this paper corrects the robot's driving angles to achieve error compensation and designs a compensation algorithm based on particle swarm optimization (PSO) and a BP neural network. On the algorithmic side, the inertia weight and learning factors of the PSO algorithm are redesigned, effectively enhancing the algorithm's global search ability and convergence performance. In addition, the core mechanisms of the genetic algorithm (selection, crossover, and mutation) are introduced to improve population diversity, ultimately yielding an improved PSO-GA-BP error compensation algorithm. The algorithm uses the improved PSO-GA component to search for the optimal correction angles and trains the BP network on the optimized dataset to achieve predictive compensation for other points. The simulation results show that the robot's comprehensive error after compensation is reduced by 83.8%, verifying the algorithm's effectiveness in positioning accuracy compensation and providing a new method for the accuracy optimization of parallel robots.

1. Introduction

In recent years, robots have gradually emerged in production lines across various industries, providing significant impetus for intelligent development in these sectors. The Delta parallel robot [1,2], with its superior kinematic and dynamic performance, has been widely applied in material sorting and packaging, electronic product processing and assembly, and related fields [3,4]. Robot positioning accuracy constitutes a primary factor affecting the performance of manufactured products [5]. Most parallel robots consist of stationary and moving platforms connected by intermediate limbs. This complex configuration enhances the load-bearing capacity and improves precision to some extent, yet its highly coupled structure poses challenges for further accuracy enhancement [6,7]. This study focuses on the Delta parallel robot as the research subject.
To enhance robot positioning accuracy, conducting error compensation research on robots is essential. Current mainstream compensation methods can be categorized into two primary types [8]: online error compensation and offline error compensation. Online error compensation [9] requires continuous monitoring of the robot end-effector’s positional status using precision measurement equipment, with real-time dynamic corrections implemented for robot trajectories based on detection data. Cheng et al. [10] introduced a feedforward compensation algorithm based on PID control. Zhang et al. [11] proposed a control strategy characterized by high precision and strong robustness, significantly improving the positional control accuracy and anti-interference capability of industrial robots during online compensation processes. Zhu et al. [12] employed a BP neural network to predict target point errors, concluding that positioning errors exhibit fundamentally continuous variations. Shu et al. [13] utilized a binocular vision system to monitor the end-effector pose in real time and implemented instantaneous corrections, elevating the robot’s positioning accuracy to ±0.2 mm.
Offline error compensation methods can generally be classified into two categories: model-based methods [14] and non-model-based methods [15]. Chen Lifeng et al. [16] proposed a joint angle error compensation strategy using Chebyshev polynomial fitting based on the robot kinematic model, achieving a 68.7% reduction in robot positioning errors. The effectiveness of model-based methods fundamentally relies on establishing precise models, yet the modeling process remains relatively complex, and such methods demonstrate limited efficacy in compensating for certain non-geometric errors. Consequently, the research focus has shifted toward non-model-based compensation approaches that eliminate model dependency while offering enhanced accuracy and adaptability. Among these, neural network-based compensation methods have emerged as predominant solutions. Wang et al. [17] implemented posture error compensation through neural networks, resulting in an 83.99% error reduction for KUKA KR500-3 robots. Li et al. [18] developed a precision enhancement algorithm combining the GPSO algorithm with neural networks, applying this methodology to achieve error prediction and compensation for KUKA drilling robots. The experimental results demonstrate an 87.9% improvement in drilling accuracy through this approach.
In summary, this study conducts error compensation research targeting Delta robots, with actuator angles as the compensation objective. We propose a hybrid intelligent error compensation algorithm integrating the particle swarm optimization (PSO), genetic algorithm (GA), and BP neural network methodologies to enhance robot positioning accuracy.

2. Structure and Parameters of the Delta Robot

The structure of the Delta parallel robot is shown in Figure 1. Three identical kinematic chains are uniformly distributed at 120° intervals to connect the stationary and moving platforms. In each kinematic chain, the active arm is connected to the stationary platform through a revolute joint. The passive arm adopts a parallelogram structure, with all four vertices of this structure configured as spherical joints. This design ensures that the passive arm connects both the active arm and the moving platform through pairs of spherical joints. By driving the three revolute joints of the active arms, the robot achieves high-speed motion in three-dimensional space. The symbols and parameter definitions in Figure 1 are listed in Table 1.
According to the simplified model of the Delta robot, the angle between the line connecting each hinge point of the stationary (moving) platform to the platform origin and the X (x) axis is as follows:
$$\varphi_i = \frac{2\pi}{3}(i - 1), \qquad i = 1, 2, 3$$
The kinematics equation of the Delta robot is as follows:
$$\left[ (r - R - L_A \cos\theta_i)\cos\varphi_i + x \right]^2 + \left[ (r - R - L_A \cos\theta_i)\sin\varphi_i + y \right]^2 + \left( z - L_A \sin\theta_i \right)^2 = L_B^2, \qquad i = 1, 2, 3$$
It can be seen from Equation (2) that the kinematics equation of the robot contains a number of parameters, and the errors of these parameters will affect the positioning accuracy of the robot. In order to analyze the specific influence of each parameter error on the positioning accuracy of the robot, Shang et al. [19] modeled the error and analyzed various error sources. The simulation concluded that the driving angle error is the main factor affecting the position error of the mechanism.
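For illustration, the loop-closure equation above can be solved numerically for the driving angles. The sketch below assumes hypothetical structural parameters $R$, $r$, $L_A$, $L_B$ and a hypothetical target point (the paper's actual values are listed in Table 1 and are not reproduced here); each $\theta_i$ is found by bisection on the residual of Equation (2):

```python
import math

# Hypothetical structural parameters (mm) -- illustrative only,
# not the values used in the paper.
R, r, LA, LB = 200.0, 50.0, 300.0, 600.0

def residual(theta, phi, p):
    """Loop-closure residual of chain i for a candidate driving angle theta."""
    x, y, z = p
    t = r - R - LA * math.cos(theta)
    return ((t * math.cos(phi) + x) ** 2
            + (t * math.sin(phi) + y) ** 2
            + (z - LA * math.sin(theta)) ** 2
            - LB ** 2)

def solve_theta(phi, p, lo=-1.0, hi=1.5, steps=60):
    """Bisection on the residual; assumes a sign change in [lo, hi]."""
    flo = residual(lo, phi, p)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if flo * residual(mid, phi, p) <= 0:
            hi = mid
        else:
            lo, flo = mid, residual(mid, phi, p)
    return 0.5 * (lo + hi)

p = (30.0, -20.0, -500.0)  # hypothetical target point below the base
thetas = [solve_theta(2 * math.pi / 3 * i, p) for i in range(3)]
```

Each returned angle drives the residual of its kinematic chain to (numerically) zero, which is exactly the condition expressed by Equation (2).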

3. Improvement of PSO Algorithm and Error Compensation

3.1. Improvement of PSO Algorithm

For position errors in parallel robots, a static error analysis can be employed to establish the models, with intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and BP neural networks applied for error compensation. Among these, the ant colony algorithm has relatively high complexity and requires extensive experimental validation, while the particle swarm optimization algorithm is prone to becoming trapped in local optima. Consequently, targeted improvements addressing this limitation are necessary to enhance its global convergence capability.
The particle swarm optimization (PSO) algorithm is inspired by the foraging behavior of bird flocks. In this framework, the optimization target is defined as a D-dimensional vector, and a population of n particles, denoted $X = (X_1, X_2, \ldots, X_n)$, is initialized. Each particle $X_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]^T$ is a D-dimensional vector representing a candidate solution. A fitness function evaluates the quality of each particle's current position by calculating its fitness value. For each particle, the individual historical best position is recorded as $P_i = [P_{i1}, P_{i2}, \ldots, P_{iD}]^T$, while the global best position discovered by the entire swarm is denoted $P_g = [P_{g1}, P_{g2}, \ldots, P_{gD}]^T$. Each particle also carries a velocity vector $V_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]^T$, which governs its movement direction and step size during the iterative search. Through continuous updates of positions and velocities, guided by both individual experience ($P_i$) and collective intelligence ($P_g$), the algorithm dynamically balances exploration of new regions against exploitation of known optima, ultimately converging toward globally optimal results.
The specific process of the standard PSO algorithm is as follows:
First, define a fitness function to quantify errors. For the i-th particle, the fitness function can be expressed as follows:
$$F_i = \sqrt{(x_{i1} - x_{i2})^2 + (y_{i1} - y_{i2})^2 + (z_{i1} - z_{i2})^2}$$
where $(x_{i1}, y_{i1}, z_{i1})$ are the coordinates before compensation and $(x_{i2}, y_{i2}, z_{i2})$ are the optimal values identified by the i-th particle, i.e., the coordinates after compensation.
Then, define the maximum iteration count $T$, swarm size $N$, particle dimension $D$, maximum position $X_{\max}$, and maximum velocity $V_{\max}$. For practical problem-solving, to avoid ineffective searches lacking physical significance, save computation time, and improve search efficiency, particle positions and velocities are typically constrained to the ranges $[-X_{\max}, X_{\max}]$ and $[-V_{\max}, V_{\max}]$, respectively, with the maximum iteration count chosen larger than the swarm size. Finally, initialize the particle velocities $V_i$ and positions $X_i$.
Finally, begin iteration. In each iteration, update the particle’s position and velocity using the following update formulas:
$$V_i^{k+1} = \omega V_i^k + c_1 r_1 (P_i^k - X_i^k) + c_2 r_2 (P_g^k - X_i^k)$$
$$X_i^{k+1} = X_i^k + V_i^{k+1}$$
where $V_i$ represents the velocity of the i-th particle; $k$ denotes the current iteration count; $\omega$ is the inertia weight; $c_1$ and $c_2$ are the learning factors of the population; and $r_1$ and $r_2$ are random numbers in the interval (0, 1), i.e., generated by the random function $rand(1)$.
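A minimal Python sketch of one iteration of this update, including the velocity clamping to $[-V_{\max}, V_{\max}]$ described earlier (function and variable names are illustrative, not from the paper):

```python
import random

def pso_step(X, V, P, Pg, w, c1, c2, vmax):
    """Apply one velocity/position update of the standard PSO, in place.

    X, V: lists of particle positions and velocities (lists of floats)
    P:    per-particle historical best positions; Pg: global best position
    """
    for i in range(len(X)):
        for d in range(len(X[i])):
            r1, r2 = random.random(), random.random()
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (P[i][d] - X[i][d])
                       + c2 * r2 * (Pg[d] - X[i][d]))
            # clamp the velocity to [-Vmax, Vmax], as described in the text
            V[i][d] = max(-vmax, min(vmax, V[i][d]))
            X[i][d] += V[i][d]
```

After each such step, the fitness of the new positions is evaluated and $P_i$ and $P_g$ are updated by comparison, as described next.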
Based on the particle’s position, the individual best value and global best value are updated by comparing the fitness values obtained in each iteration. The individual best value is determined as follows:
$$F_{p_i} = F(P_i)$$
where F p i represents the historical best fitness value of the i-th particle and P i denotes the position where the i-th particle achieved this optimal fitness value.
The flowchart of the standard particle swarm optimization (PSO) algorithm is shown in Figure 2.
The inertia weight factor ω governs the particle’s retention of its previous velocity, playing a critical role in balancing global exploration (search space coverage) and local exploitation (neighborhood refinement). A larger ω enhances global search capability by encouraging broader exploration, while a smaller ω strengthens local searches by focusing on refined exploitation. To achieve an optimal balance, ω is typically designed as a dynamically varying function throughout iterations. Below, we analyze four representative dynamic adjustment strategies and their performance implications:
$$\omega_k = \omega_{ini} - (\omega_{ini} - \omega_{end}) \frac{k}{T}$$
$$\omega_k = \omega_{ini} - (\omega_{ini} - \omega_{end}) \left( \frac{k}{T} \right)^2$$
$$\omega_k = \omega_{ini} - (\omega_{ini} - \omega_{end}) \left[ \frac{2k}{T} - \left( \frac{k}{T} \right)^2 \right]$$
$$\omega_k = \omega_{end} \left( \frac{\omega_{ini}}{\omega_{end}} \right)^{\frac{1}{1 + a k / T}}$$
The inertia weight minimum value is denoted $\omega_{ini}$, which is also the initial value of the inertia weight. The current iteration count is $k$, the maximum iteration count is $T$, and the inertia weight maximum value is denoted $\omega_{end}$, which corresponds to the inertia weight at the final iteration. A constant $a$ parameterizes the fourth adjustment strategy. The variation curves of the four improved inertia weight factors are illustrated in Figure 3.
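The four schedules can be written compactly as follows (a sketch; the function names are illustrative, and the default for the constant $a$ in the fourth strategy is an assumption, since its value is not specified in the text):

```python
def omega_linear(k, T, w_ini, w_end):
    """Strategy 1: linear variation from w_ini (k = 0) to w_end (k = T)."""
    return w_ini - (w_ini - w_end) * k / T

def omega_quadratic(k, T, w_ini, w_end):
    """Strategy 2: quadratic variation, slow at first, fast near the end."""
    return w_ini - (w_ini - w_end) * (k / T) ** 2

def omega_concave(k, T, w_ini, w_end):
    """Strategy 3: fast at first, slow near the end."""
    s = k / T
    return w_ini - (w_ini - w_end) * (2 * s - s * s)

def omega_exponential(k, T, w_ini, w_end, a=4.0):
    """Strategy 4: exponential-form variation; a controls the decay shape."""
    return w_end * (w_ini / w_end) ** (1.0 / (1.0 + a * k / T))
```

All four start at $\omega_{ini}$ when $k = 0$; the first three reach $\omega_{end}$ exactly at $k = T$, while the fourth approaches it asymptotically.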
The cognitive learning factor c 1 and social learning factor c 2 critically influence the adjustment of particle trajectories, representing the particle’s emphasis on individual experiential knowledge and collective swarm experience, respectively. When c 1 is assigned a larger value, particles exhibit a stronger tendency to search near their own known optimal positions. Conversely, a larger c 2 drives particles to converge toward optimal positions identified by other particles in the swarm.
The enhanced learning factors are designed as follows:
$$c_1 = c_{1\max} - (c_{1\max} - c_{1\min}) \times \frac{k - 1}{T - 1}$$
$$c_2 = c_{2\min} + (c_{2\max} - c_{2\min}) \times \frac{k - 1}{T - 1}$$
where c 1 max and c 1 min represent the predefined maximum and minimum values of the cognitive learning factor, respectively; c 2 max and c 2 min denote the predefined maximum and minimum values of the social learning factor, respectively; k indicates the current iteration count; and T signifies the maximum iteration count.
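A sketch of this learning factor schedule; the default bounds are taken from the configuration reported in Section 3.2, and the helper name is illustrative:

```python
def learning_factors(k, T, c1_max=1.5, c1_min=0.8, c2_max=2.5, c2_min=1.5):
    """Linearly decrease c1 and increase c2 over iterations k = 1..T."""
    s = (k - 1) / (T - 1)
    c1 = c1_max - (c1_max - c1_min) * s
    c2 = c2_min + (c2_max - c2_min) * s
    return c1, c2
```

Early iterations thus weight individual experience more heavily (large $c_1$), while later iterations weight swarm experience more heavily (large $c_2$).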
To assess the efficacy of the enhanced PSO algorithm, the widely adopted Rastrigin function is selected from the standard benchmark test functions. This function is highly multimodal with an abundance of local minima, posing a significant challenge to optimization algorithms, which are easily trapped in local optima during the search. The mathematical expression of the Rastrigin function is as follows:
$$f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$$
where $x_i \in [-5.12, 5.12]$ and $\min f(x) = f(0, 0, \ldots, 0) = 0$.
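The benchmark is straightforward to implement; the following sketch evaluates it for any dimension n:

```python
import math

def rastrigin(x):
    """Rastrigin benchmark function; global minimum f(0, ..., 0) = 0."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)
```

The many local minima arise from the cosine term; for example, every integer lattice point is a local minimum.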
Both the standard PSO algorithm and the improved PSO algorithm were applied to search for the minimum value of the Rastrigin function, with min f x serving as the fitness function. The computational results are shown in Figure 4.
As shown in Figure 4, both the improved PSO algorithm and the standard PSO algorithm successfully find the minimum value of the Rastrigin function. Between 0 and 10 iterations, the fitness values of both algorithms decrease rapidly, but the improved PSO algorithm descends significantly faster. Furthermore, the improved PSO algorithm reaches a stable state (i.e., finds the optimal solution) at 20 iterations, whereas the standard PSO algorithm becomes trapped in local optima between 10 and 20 iterations and only gradually stabilizes after 30 iterations. In conclusion, the improved PSO algorithm exhibits significantly superior performance: it converges faster, is more stable, and avoids local optima more effectively.

3.2. Error Compensation Based on Improved PSO Algorithm

Based on the aforementioned improved PSO algorithm, a simulation experiment for Delta robot error compensation is conducted. Forty randomly selected points in the workspace serve as compensation targets. First, the ideal position coordinates of these 40 points are input into the robot's inverse kinematics program to obtain 40 sets of ideal actuation angles. Using the error model, combined with the robot's structural parameters and error source magnitudes, 40 sets of actual position coordinates are calculated. To ensure the effectiveness of compensation, the parameters $R$, $r$, $L_A$, $L_B$, and the spherical joint clearance are randomly varied within [0, 0.100] mm, while $\theta_1$, $\theta_2$, and $\theta_3$ are randomly varied within [0, 0.100°].
Given that actuation angle errors exert the most significant impact on robot positioning accuracy among all error sources, the compensation strategy focuses on the robot's three actuation angle errors, that is, D = 3. Using the proposed improved PSO algorithm, the optimal actuation angle corrections are sought to minimize the comprehensive end-effector error $\Delta e$. The parameters of the improved PSO algorithm are configured as follows: swarm size $N = 20$; maximum iterations $T = 50$; cognitive learning factor maximum $c_{1\max} = 1.5$; cognitive learning factor minimum $c_{1\min} = 0.8$; social learning factor maximum $c_{2\max} = 2.5$; social learning factor minimum $c_{2\min} = 1.5$; inertia weight minimum $\omega_{ini} = 0.4$; inertia weight maximum $\omega_{end} = 0.9$; maximum particle velocity $v_{\max} = 0.25$; and minimum particle velocity $v_{\min} = -0.25$.
First, the first set of data was selected and the improved particle swarm optimization (PSO) algorithm was utilized to optimize the robot’s actuation angle errors, with the fitness function defined as follows:
$$\mathrm{Fitness} = \Delta e = \sqrt{\Delta X^2 + \Delta Y^2 + \Delta Z^2}$$
The iteration curve of its fitness function is shown in Figure 5. After 10 iterations, the results stabilized.
The 40 groups of data records before and after compensation are shown in Table 2. The results before and after compensation are presented in Figure 6. The average comprehensive error values of the 40 datasets before and after compensation are recorded, denoted as e ¯ , along with the average error values in the X , Y , and Z directions, denoted as X ¯ , Y ¯ , Z ¯ , as shown in Table 3.
As shown in Figure 6 and Table 3, after applying the improved PSO algorithm for optimization compensation, the errors of most data points are reduced to below 0.050 mm. The average comprehensive error $\bar{e}$ decreases from 0.086 mm before compensation to 0.022 mm after compensation, a 74.4% reduction. These results demonstrate the algorithm's effectiveness in reducing the robot's overall comprehensive error. However, for certain data points (the bolded rows in Table 2) the compensation effect is negligible or even approaches zero, indicating instability in the improvement. Additionally, the average errors in the X and Y directions ($\bar{X}$, $\bar{Y}$) are reduced by 36.3% and 10.7%, respectively, whereas in the Z direction the average error $\bar{Z}$ increases due to error accumulation along the Z-axis. Therefore, further algorithm optimization is required to enhance stability.

4. Improved PSO-GA Algorithm and Error Compensation

4.1. Improved PSO-GA Algorithm

Among various intelligent algorithms, the genetic algorithm (GA) demonstrates superior capabilities in maintaining population diversity and conducting global search. Unlike the PSO algorithm, its core advantages stem from competitive selection based on fitness values, chromosomal crossover, and chromosomal mutation. These operations facilitate the population’s convergence toward the global optimum while preserving diversity [20]. Inspired by the performance enhancement potential of algorithm hybridization, this study integrates GA’s selection, mutation, and crossover operators into the PSO framework. This integration aims to further improve the algorithm’s global search capability and prevent entrapment in local optima.
When introducing the genetic algorithm’s crossover mechanism into the PSO algorithm, the crossover probability is appropriately increased during the initial iteration phase to enhance population diversity by promoting information recombination among different individuals. In the later iteration phases, as individuals become more similar, maintaining a high crossover rate may lead to increased distance between newly generated individuals and the optimal solution, thereby reducing the proportion of high-fitness individuals in the population. Therefore, the crossover probability should be reduced during the later search stages. The improved crossover probability formula is defined as:
$$P_c = P_{c,ini} - (P_{c,ini} - P_{c,\min}) \left( \frac{k}{T} \right)^2$$
where $P_{c,ini}$ denotes the initial crossover probability; $P_{c,\min}$ denotes the minimum crossover probability; $k$ represents the current iteration count; and $T$ indicates the maximum iteration count.
When introducing the mutation operator into the particle swarm optimization algorithm, a higher mutation probability is typically applied during the initial phase to facilitate rapid global search. As the iteration progresses and approaches the optimal solution, the mutation probability gradually decreases to maintain the stability of high-quality solutions and accelerate convergence. Therefore, the following adaptive mutation rate is adopted:
$$P_m = \begin{cases} \dfrac{P_{m\max} + P_{m\min}}{2} - \dfrac{P_{m\max} - P_{m\min}}{2} \sin\left( \dfrac{f_{avg} - f}{f_{avg} - f_{\min}} \cdot \dfrac{\pi}{2} \right), & f \le f_{avg} \\[2ex] 0.5 + 0.1 \dfrac{f}{f_{avg}}, & f > f_{avg} \end{cases}$$
where P m max and P m min denote the maximum mutation probability and minimum mutation probability, respectively; f represents the fitness value of the current individual; and f a v g and f min indicate the average fitness value and minimum fitness value of the current population, respectively.
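The two adaptive probabilities can be sketched as follows. The default bounds are taken from the configuration in Section 4.2; the small denominator guard and the exact form of the $f > f_{avg}$ branch are reconstructions (assumptions), since that branch is ambiguous in the source:

```python
import math

def crossover_prob(k, T, pc_ini=0.9, pc_min=0.6):
    """Quadratically decreasing crossover probability over the run."""
    return pc_ini - (pc_ini - pc_min) * (k / T) ** 2

def mutation_prob(f, f_avg, f_min, pm_max=0.2, pm_min=0.01):
    """Adaptive mutation rate, following the piecewise definition above."""
    if f <= f_avg:
        # guard against f_avg == f_min (assumption, not in the source)
        arg = (f_avg - f) / (f_avg - f_min + 1e-12) * math.pi / 2
        return ((pm_max + pm_min) / 2
                - (pm_max - pm_min) / 2 * math.sin(arg))
    # f > f_avg branch: reconstructed as 0.5 + 0.1 * f / f_avg (assumption)
    return 0.5 + 0.1 * f / f_avg
```

Note the intended behavior: the best individual ($f = f_{\min}$) receives the minimum mutation probability, preserving high-quality solutions, while below-average individuals mutate aggressively.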
Based on the aforementioned research, a PSO-GA hybrid optimization algorithm is designed. By incorporating the selection, crossover, and mutation mechanisms of the genetic algorithm, it achieves rapid convergence and enhanced global search capabilities. The flowchart is shown in Figure 7.
To validate the effectiveness of the proposed improved PSO-GA algorithm, the Rastrigin function defined in Equation (11) is similarly adopted as the test function, with the fitness function remaining min f x . The computational results obtained using both the improved PSO algorithm and the improved PSO-GA algorithm are shown in Figure 8.
As shown in Figure 8, the improved PSO-GA algorithm exhibits significant performance advantages compared to the improved PSO algorithm: its fitness curve demonstrates rapid decay characteristics during the initial iteration phase, entering a stable state after 20 iterations; in contrast, although the improved PSO algorithm eventually achieves convergence, its descent process is relatively slower, requiring more iterations to stabilize. By comprehensively considering the algorithm’s convergence rate, convergence accuracy, and output stability, the improvements are confirmed to be effective.

4.2. Error Compensation Based on Improved PSO-GA Algorithm

The improved PSO-GA algorithm is applied to perform error compensation on 40 randomly selected datasets, with parameters configured as follows: swarm size $N = 20$; maximum iterations $T = 50$; cognitive learning factor maximum $c_{1\max} = 1.5$; cognitive learning factor minimum $c_{1\min} = 0.8$; social learning factor maximum $c_{2\max} = 2.5$; social learning factor minimum $c_{2\min} = 1.5$; inertia weight minimum $\omega_{ini} = 0.4$; inertia weight maximum $\omega_{end} = 0.9$; maximum particle velocity $v_{\max} = 0.25$; minimum particle velocity $v_{\min} = -0.25$; initial crossover probability $P_{c,ini} = 0.9$; minimum crossover probability $P_{c,\min} = 0.6$; maximum mutation probability $P_{m\max} = 0.2$; and minimum mutation probability $P_{m\min} = 0.01$.
The 40 groups of data records before and after compensation are shown in Table 4. The results before and after compensation are presented in Figure 9. The average comprehensive error values of the 40 datasets before and after compensation are recorded, denoted as e ¯ , along with the average error values in the X , Y , and Z directions, denoted as X ¯ , Y ¯ , Z ¯ , as shown in Table 5.
As shown in Figure 9 and Table 5, the improved PSO-GA algorithm demonstrates significant advantages in error compensation. The uncompensated comprehensive error exhibits large fluctuations, with a maximum value approaching 0.200 mm. After applying the improved PSO-GA algorithm, the comprehensive error is substantially reduced, with most data points achieving errors below 0.050 mm. Compared with the improved PSO algorithm of the previous section, the improved PSO-GA algorithm not only further reduces the comprehensive error (the average comprehensive error decreases from 0.132 mm to 0.022 mm, an 83.3% reduction) but also shows that the error improvement closely tracks the initial error values, indicating more pronounced compensation effects. Furthermore, the standard deviation of the post-compensation comprehensive error decreases from 0.0193 to 0.0135, demonstrating enhanced compensation stability. These results show that the improved PSO-GA algorithm outperforms the improved PSO algorithm in robotic error compensation, delivering stronger performance and stability to effectively enhance robot positioning accuracy.
Although the aforementioned improved PSO-GA algorithm exhibits excellent performance in single-point error compensation, robotic motion requires compensation across the entire workspace rather than isolated points. To enhance the global compensation capability, this study proposes a hybrid compensation framework integrating particle swarm optimization with neural networks. The approach leverages PSO’s optimization capability for single-point compensation and then employs neural networks to learn the compensation patterns from these single-point results, ultimately enabling compensation prediction for unknown points.

5. Error Compensation Based on Improved PSO-GA-BP Algorithm

The BP (Back Propagation) neural network is a type of multi-layer feedforward neural network model. In neural network design, the number of hidden layers and node quantity are critical factors influencing the network’s performance. This section adopts a three-layer BP neural network architecture with one hidden layer. The number of hidden layer nodes is determined using the following empirical formula:
$$p = \sqrt{m + n} + a$$
where p represents the number of hidden layer nodes; m and n denote the number of input layer nodes and output layer nodes, respectively; and a is an adjustment parameter, taking an integer value between 5 and 10. For the error compensation problem addressed in this study, the input parameters include actual coordinate points; thus, the number of input layer nodes is 3. The output quantities are actuation angle compensation values, so the number of output layer nodes is also 3. According to the empirical formula, the number of hidden layer nodes should fall within the range of 8 to 13.
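Under the stated range for the adjustment parameter, and assuming the result is rounded up to an integer (an assumption consistent with the quoted range of 8 to 13 nodes), the empirical rule can be checked numerically:

```python
import math

def hidden_nodes(m, n, a):
    """Empirical rule p = sqrt(m + n) + a, rounded up to an integer
    (rounding convention is an assumption)."""
    return math.ceil(math.sqrt(m + n) + a)
```

With m = n = 3 inputs and outputs, a in [5, 10] gives between 8 and 13 hidden nodes, matching the range stated above.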
The flowchart is shown in Figure 10.
To enable the BP neural network to compensate for robot errors, a dataset must be collected and the neural network trained. First, 5000 sets of coordinate values are randomly generated within the workspace and input into the error model, where all error sources are randomly selected within defined ranges following uniform distributions to ensure dataset diversity and generality. Positional errors are then calculated for each dataset. Subsequently, the improved PSO-GA algorithm is applied to optimize these 5000 datasets, deriving optimal actuation angle compensation values. The original coordinates and optimal compensation values are combined to form a 6-dimensional dataset (3D coordinates + 3D compensation angles). This dataset is split into a training set and a test set at a 4:1 ratio. The neural network is trained using the training set, with the comprehensive error as the objective function, and validated through predictions on the test set. To demonstrate compensation effectiveness, 40 predicted datasets are analyzed. The neural network parameters are configured as detailed in Table 6.
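The 4:1 split described above can be sketched as follows; `split_dataset` is a hypothetical helper name, and the fixed seed is only for reproducibility:

```python
import random

def split_dataset(data, train_frac=0.8, seed=0):
    """Shuffle the 6-D samples (3 coordinates + 3 compensation angles)
    and split them into training and test sets at a 4:1 ratio."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * train_frac)
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return train, test
```

For the 5000 optimized datasets described above, this yields 4000 training samples and 1000 test samples.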
After configuring the neural network parameters, the neural network is trained using the training set, with the training effect illustrated in Figure 11. During the initial training phase (iterations 0–200), the mean squared error (MSE) decreases rapidly. Since the training met the precision requirement of 0.0025, the process was terminated early at iteration 3744.
The trained network is used for prediction on the test set. To visually demonstrate the network’s prediction performance, 40 datasets from the test set are selected to record their uncompensated errors, errors after compensation by the improved PSO-GA algorithm, and errors after compensation predicted by the BP network, as shown in Figure 12 and Table 7.
As shown in Figure 12 and Table 7, the errors after predicted compensation closely match the errors after optimized compensation by the improved PSO-GA algorithm (i.e., the prediction targets). In most datasets, the predicted compensation even achieves smaller errors. Among the 40 random datasets, the predicted compensation reduces errors by 83.8% compared to the uncompensated errors, outperforming the 81.7% reduction from optimized compensation alone. Additionally, the standard deviation of the predicted compensation errors is slightly lower than that of the optimized compensation errors. These results demonstrate that the improved PSO-GA-BP error compensation algorithm—combining optimized compensation via the improved PSO-GA algorithm with BP neural network predictions—effectively reduces robotic end-effector errors and enhances positioning accuracy.

6. Conclusions

Given the significant impact of actuation angle errors on positioning accuracy, an improved PSO-GA-BP error compensation algorithm targeting actuation angle compensation is proposed. This algorithm enhances the global search capability and convergence performance by modifying the inertia weight and learning factors of the particle swarm optimization algorithm. Additionally, it improves population diversity through the introduction of selection, crossover, and mutation mechanisms from the genetic algorithm. Finally, the predictive capability of the BP neural network is utilized to achieve global error prediction and compensation. The simulation results demonstrate that the algorithm reduces the robot’s comprehensive error by 83.8%, significantly improving the positioning accuracy, thereby validating its effectiveness and applicability. This approach provides a novel solution for error compensation in parallel robots.

Author Contributions

K.Y. data curation, software, writing—original draft, methodology; Z.P. data curation, software, writing—original draft, methodology; L.Z. editing and methodology; Q.L. writing—review and editing, supervision; D.S. writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (52174154).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare they have no conflicts of interest to report regarding the present study.

Figure 1. Structural diagram of Delta robot.
Figure 2. Flowchart of standard particle swarm optimization algorithm.
Figure 3. Trend diagram of the strategy change of the inertia weight factor.
Figure 4. Optimization process of the Rastrigin function under two optimization algorithms.
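Figure 4 compares the two optimizers on the Rastrigin function, a standard multimodal benchmark whose many local minima expose premature convergence. For reproducing such a benchmark, a minimal definition:

```python
import numpy as np

def rastrigin(x):
    # f(x) = 10 n + sum_i (x_i^2 - 10 cos(2 pi x_i)); global minimum f(0) = 0
    x = np.asarray(x, dtype=float)
    return 10 * x.size + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

print(rastrigin([0.0, 0.0]))  # 0.0 at the global optimum
```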
Figure 5. Error compensation curve based on the improved PSO algorithm.
Figure 6. Effect diagram of error compensation based on the improved PSO algorithm.
Figure 7. Flowchart of the improved PSO-GA algorithm.
Figure 8. Calculation result diagram of the improved PSO algorithm and the improved PSO-GA algorithm.
Figure 9. Effect diagram of error compensation based on the improved PSO-GA algorithm.
Figure 10. BP network training flowchart.
Figure 11. Neural network training performance.
Figure 12. Error compensation result diagram based on the PSO-GA-BP algorithm.
Table 1. Delta robot model parameters.

Symbol | Parameter | Dimensions (mm) | Parameter Meaning
AiBi | LA | 400 | Driving arm length
BiCi | LB | 950 | Follower arm length
OAi | R | 126 | Circumradius of the static platform's equilateral triangle
PCi | r | 51 | Circumradius of the moving platform's equilateral triangle
θi | θi | - | Angle of the driving arm relative to the static platform
φi | φi | - | Angle between the line connecting Ai (Ci) and O (P) and the X (x)-axis
Table 2. Data coordinates and error values compensated by the improved PSO algorithm.

No. | Ideal X (mm) | Ideal Y (mm) | Ideal Z (mm) | Original Error (mm) | Error After Compensation (mm) | Amplitude of Compensation (mm)
1 | 373.579 | 170.695 | 921.105 | 0.103 | 0.019 | 0.083
2 | −130.665 | −64.265 | 1016.487 | 0.060 | 0.058 | 0.003
3 | −143.390 | 138.113 | 807.741 | 0.068 | 0.052 | 0.016
4 | −191.862 | −158.974 | 927.535 | 0.066 | 0.006 | 0.061
5 | −491.601 | −391.636 | 1012.768 | 0.160 | 0.066 | 0.094
6 | −113.576 | −245.030 | 979.749 | 0.108 | 0.046 | 0.062
7 | −485.633 | −70.957 | 826.691 | 0.123 | 0.031 | 0.092
8 | −123.761 | 242.067 | 624.207 | 0.090 | 0.009 | 0.081
9 | −323.206 | 205.177 | 867.575 | 0.112 | 0.033 | 0.079
10 | 313.920 | 170.478 | 706.465 | 0.153 | 0.092 | 0.062
11 | −180.150 | 7.215 | 714.623 | 0.071 | 0.009 | 0.061
12 | −86.731 | −117.723 | 710.011 | 0.102 | 0.033 | 0.069
13 | −10.453 | 471.915 | 764.279 | 0.083 | 0.007 | 0.076
14 | −107.376 | −319.887 | 753.592 | 0.084 | 0.020 | 0.063
15 | −198.488 | 275.178 | 999.916 | 0.085 | 0.011 | 0.074
16 | 127.753 | −214.759 | 991.118 | 0.122 | 0.024 | 0.098
17 | −439.073 | −137.238 | 947.999 | 0.095 | 0.015 | 0.079
18 | −66.384 | −272.029 | 791.785 | 0.029 | 0.027 | 0.003
19 | 323.687 | 278.202 | 934.981 | 0.103 | 0.016 | 0.087
20 | 61.928 | −150.059 | 1022.058 | 0.041 | 0.038 | 0.003
21 | 69.329 | −329.302 | 697.458 | 0.096 | 0.033 | 0.063
22 | −378.689 | −56.649 | 679.486 | 0.050 | 0.043 | 0.008
23 | −127.555 | 266.606 | 910.705 | 0.050 | 0.045 | 0.005
24 | 54.771 | −438.906 | 1005.727 | 0.088 | 0.011 | 0.077
25 | 478.074 | −132.360 | 806.493 | 0.058 | 0.057 | 0.001
26 | −260.313 | −156.720 | 1011.604 | 0.115 | 0.036 | 0.079
27 | 26.413 | 468.798 | 973.809 | 0.072 | 0.003 | 0.069
28 | 451.687 | −157.433 | 968.653 | 0.086 | 0.017 | 0.069
29 | −325.454 | −390.577 | 886.807 | 0.079 | 0.013 | 0.065
30 | −124.531 | −414.721 | 705.218 | 0.086 | 0.017 | 0.068
31 | −203.186 | −19.544 | 914.683 | 0.032 | 0.024 | 0.008
32 | −392.908 | −278.712 | 982.463 | 0.095 | 0.026 | 0.069
33 | −148.310 | 98.696 | 778.656 | 0.089 | 0.014 | 0.074
34 | 466.993 | −341.094 | 925.889 | 0.104 | 0.019 | 0.085
35 | 93.878 | 72.823 | 915.715 | 0.068 | 0.002 | 0.065
36 | 96.030 | 399.584 | 979.805 | 0.095 | 0.019 | 0.076
37 | −193.818 | −384.143 | 856.602 | 0.098 | 0.038 | 0.060
38 | −267.529 | 474.118 | 949.228 | 0.059 | 0.059 | 0.000
39 | 111.725 | −197.061 | 913.722 | 0.030 | 0.015 | 0.016
Table 3. Error data before and after compensation by the improved PSO algorithm.

Mean Error of 40 Sets of Data | Δē | ΔX̄ | ΔȲ | ΔZ̄
Initial error value (mm) | 0.086 | 0.011 | 0.028 | 0.003
Compensated error value (mm) | 0.022 | 0.007 | 0.025 | 0.008
Standard deviation of comprehensive error after compensation: 0.0193
Table 4. Data coordinates and error values compensated by the improved PSO-GA algorithm.

No. | Ideal X (mm) | Ideal Y (mm) | Ideal Z (mm) | Original Error (mm) | Error After Compensation (mm) | Amplitude of Compensation (mm)
1 | −479.008 | 197.144 | 658.653 | 0.1533 | 0.0297 | 0.1236
2 | −179.081 | −13.810 | 874.468 | 0.1455 | 0.0259 | 0.1196
3 | −412.873 | 378.845 | 887.193 | 0.1442 | 0.0199 | 0.1244
4 | −350.089 | −313.400 | 625.256 | 0.1312 | 0.0078 | 0.1234
5 | −423.676 | −209.984 | 908.941 | 0.0740 | 0.0259 | 0.0481
6 | −132.179 | −138.328 | 813.408 | 0.0750 | 0.0231 | 0.0519
7 | −347.939 | 345.759 | 654.400 | 0.1441 | 0.0238 | 0.1203
8 | −290.682 | −290.839 | 1016.256 | 0.1340 | 0.0119 | 0.1221
9 | −39.627 | 433.050 | 965.989 | 0.1378 | 0.0202 | 0.1176
10 | −360.886 | −38.199 | 735.506 | 0.1817 | 0.0215 | 0.1602
11 | −327.350 | 335.781 | 966.214 | 0.1318 | 0.0047 | 0.1271
12 | −162.265 | 168.803 | 974.570 | 0.1408 | 0.0090 | 0.1317
13 | −312.657 | −214.606 | 746.909 | 0.0693 | 0.0222 | 0.0471
14 | −465.965 | 107.395 | 1019.121 | 0.1421 | 0.0180 | 0.1241
15 | −365.776 | −14.993 | 867.893 | 0.1499 | 0.0339 | 0.1160
16 | 149.188 | −227.663 | 831.494 | 0.0771 | 0.0229 | 0.0541
17 | −396.988 | 130.714 | 690.824 | 0.0768 | 0.0219 | 0.0549
18 | −230.745 | 441.374 | 900.966 | 0.1357 | 0.0115 | 0.1242
19 | −273.912 | 487.273 | 942.212 | 0.1365 | 0.0155 | 0.1210
20 | −312.448 | 191.238 | 798.324 | 0.1268 | 0.0102 | 0.1166
21 | −376.623 | −24.297 | 1016.452 | 0.1594 | 0.0137 | 0.1457
22 | −333.949 | 293.053 | 1000.093 | 0.1566 | 0.0216 | 0.1350
23 | −444.808 | −231.365 | 797.813 | 0.1538 | 0.0231 | 0.1307
24 | −473.303 | 404.580 | 954.564 | 0.1905 | 0.0211 | 0.1694
25 | −423.732 | −343.207 | 910.340 | 0.1980 | 0.0318 | 0.1662
26 | 398.557 | −2.554 | 907.339 | 0.0618 | 0.0157 | 0.0461
27 | −428.205 | −134.441 | 900.492 | 0.1558 | 0.0288 | 0.1270
28 | −259.615 | −358.083 | 775.877 | 0.1501 | 0.0279 | 0.1222
29 | 8.353 | 402.056 | 974.203 | 0.2109 | 0.0809 | 0.1300
30 | 81.792 | −103.117 | 792.291 | 0.1093 | 0.0539 | 0.0555
31 | 238.873 | −376.319 | 906.811 | 0.0823 | 0.0354 | 0.0469
32 | −22.259 | −453.157 | 825.992 | 0.1610 | 0.0292 | 0.1318
33 | −132.647 | −459.990 | 815.942 | 0.1390 | 0.0040 | 0.1350
34 | −348.641 | −427.825 | 996.502 | 0.1377 | 0.0195 | 0.1182
35 | −147.779 | −242.677 | 864.484 | 0.1410 | 0.0070 | 0.1340
36 | −494.571 | 99.856 | 964.851 | 0.1422 | 0.0142 | 0.1280
37 | −426.651 | 498.922 | 920.423 | 0.1358 | 0.0091 | 0.1268
38 | 375.850 | 286.586 | 805.150 | 0.0673 | 0.0111 | 0.0561
39 | 388.683 | −473.132 | 1019.423 | 0.1436 | 0.0264 | 0.1172
40 | 61.013 | 115.231 | 1022.004 | 0.079 | 0.022 | 0.057
Table 5. Error data before and after compensation by the improved PSO-GA algorithm.

Mean Error of 40 Sets of Data | Δē | ΔX̄ | ΔȲ | ΔZ̄
Initial error value (mm) | 0.132 | 0.012 | 0.021 | 0.005
Compensated error value (mm) | 0.022 | 0.008 | 0.017 | 0.011
Standard deviation of comprehensive error after compensation: 0.0135
Table 6. Parameters of BP neural network.

Settings | Parameters
Number of input layer nodes | 3
Number of output layer nodes | 3
Number of hidden layer nodes | 10
Iteration number | 2000
Learning rate | 0.01
Training objective | 0.002
Activation function | tansig
Training function | trainrp
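The configuration in Table 6 corresponds to a 3-10-3 network. The following numpy sketch reproduces that architecture under stated assumptions: tansig is implemented as tanh with a linear output layer, the paper's MATLAB trainrp (resilient backpropagation) training function is replaced by plain batch gradient descent for brevity, and the training data are a synthetic stand-in, not the paper's PSO-GA-optimized dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Architecture from Table 6: 3 input nodes, 10 hidden nodes (tansig ~ tanh),
# 3 output nodes, learning rate 0.01, MSE goal 0.002, at most 2000 iterations.
W1 = rng.normal(0.0, 0.5, (3, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 3)); b2 = np.zeros(3)

# Synthetic stand-in data: end-effector coordinates -> small correction vectors.
X = rng.uniform(-1.0, 1.0, (200, 3))
Y = 0.1 * np.sin(X)

lr, goal, history = 0.01, 0.002, []
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)           # tansig hidden layer
    P = H @ W2 + b2                    # linear output layer
    err = P - Y
    mse = float(np.mean(err ** 2))
    history.append(mse)
    if mse < goal:                     # training objective reached
        break
    gP = 2.0 * err / len(X)            # gradient of MSE w.r.t. outputs
    gW2, gb2 = H.T @ gP, gP.sum(0)
    gH = (gP @ W2.T) * (1.0 - H ** 2)  # tanh derivative
    gW1, gb1 = X.T @ gH, gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {history[0]:.4f} -> {history[-1]:.4f}")
```

In the paper's workflow, X would be the ideal coordinates and Y the optimal correction angles found by the improved PSO-GA stage, so the trained network can predict corrections for unvisited points.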
Table 7. Error data before and after compensation by the improved PSO-GA-BP algorithm.

Comprehensive Error | Error Before Compensation | Error After Optimization Compensation | Error After Prediction Compensation
Average error of 40 groups of data (mm) | 0.142 | 0.026 | 0.023
Standard deviation of 40 groups of data (mm) | 0.020 | 0.016 | 0.014
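The headline 83.8% figure follows directly from the Table 7 means, taking the error after prediction compensation relative to the error before compensation:

```python
# Mean comprehensive errors from Table 7 (mm)
before, after_opt, after_pred = 0.142, 0.026, 0.023

reduction = (before - after_pred) / before
print(f"comprehensive error reduction: {reduction:.1%}")  # 83.8%
```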
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, K.; Pan, Z.; Zheng, L.; Li, Q.; Shang, D. Error Compensation for Delta Robot Based on Improved PSO-GA-BP Algorithm. Mathematics 2025, 13, 2118. https://doi.org/10.3390/math13132118
