3.1. Improvement of PSO Algorithm
For position errors in parallel robots, a static error analysis can be employed to establish the models, with intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and BP neural networks applied for error compensation. Among these, the ant colony algorithm exhibits relatively high complexity and requires extensive experimental validation, while the particle swarm optimization algorithm is prone to becoming trapped in local optima. Consequently, targeted improvements addressing this limitation are necessary to enhance its global convergence capability.
The particle swarm optimization (PSO) algorithm is inspired by observations of bird flock foraging behavior. In this framework, the optimization target is defined as a $D$-dimensional vector, and a population of $n$ particles, denoted as $X = (x_1, x_2, \ldots, x_n)$, is initialized. Each particle represents a $D$-dimensional vector corresponding to a potential solution. A fitness function evaluates the quality of each particle's current position by calculating its fitness value. For each particle, its individual historical best position is recorded as $p_{best}$, while the global best position discovered by the entire swarm is denoted as $g_{best}$. Additionally, each particle possesses a velocity vector $v_i$, which governs its movement direction and magnitude during the iterative search process. Through continuous updates of particle positions and velocities, guided by both individual experience ($p_{best}$) and collective intelligence ($g_{best}$), the algorithm dynamically balances exploration of new regions and exploitation of known optimal solutions, ultimately converging toward globally optimal results.
The specific process of the standard PSO algorithm is as follows:
First, define a fitness function to quantify errors. For the $i$-th particle, the fitness function can be expressed as follows:

$$F_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2 + (z_i - z_0)^2}$$

where $(x_0, y_0, z_0)$ represents the coordinates before compensation and $(x_i, y_i, z_i)$ denotes the optimal values identified by the $i$-th particle, i.e., the coordinate values after compensation.
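As a minimal sketch, this fitness evaluation amounts to a Euclidean distance between a particle's candidate coordinates and the reference coordinates; the function and variable names below are illustrative, not from the original implementation:

```python
import math

def fitness(candidate, reference):
    """Fitness of one particle: Euclidean distance between the particle's
    candidate coordinates and the reference coordinates (smaller is better).
    Both arguments are (x, y, z) tuples."""
    return math.sqrt(sum((c - r) ** 2 for c, r in zip(candidate, reference)))
```

For instance, `fitness((3.0, 0.0, 0.0), (0.0, 4.0, 0.0))` evaluates the familiar 3-4-5 distance.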
Then, define the maximum iteration count $k_{max}$, swarm size $n$, particle dimension $D$, maximum position $X_{max}$, and maximum velocity $V_{max}$. For practical reasons, namely to eliminate ineffective searches lacking physical significance, save computation time, and improve search efficiency, particle positions and velocities are typically constrained to the ranges $[-X_{max}, X_{max}]$ and $[-V_{max}, V_{max}]$, respectively. Thereafter, initialize the particle velocities $v_i$ and positions $x_i$.
Finally, begin iteration. In each iteration, update each particle's velocity and position using the following update formulas:

$$v_i^{k+1} = \omega v_i^k + c_1 r_1 \left( p_{best,i}^k - x_i^k \right) + c_2 r_2 \left( g_{best}^k - x_i^k \right)$$

$$x_i^{k+1} = x_i^k + v_i^{k+1}$$

where $v_i^k$ represents the velocity of the $i$-th particle; $k$ denotes the current iteration count; $\omega$ is the inertia weight; $c_1$ and $c_2$ are the learning factors for the population; and $r_1$ and $r_2$ are random numbers within the interval (0, 1), i.e., generated by the random function $rand(0, 1)$.
Based on the particle's position, the individual best value and global best value are updated by comparing the fitness values obtained in each iteration. The individual best value is determined as follows:

$$p_{best,i}^{k+1} = \begin{cases} x_i^{k+1}, & F\left(x_i^{k+1}\right) < F_{best,i} \\ p_{best,i}^k, & \text{otherwise} \end{cases}$$

where $F_{best,i}$ represents the historical best fitness value of the $i$-th particle and $p_{best,i}$ denotes the position where the $i$-th particle achieved this optimal fitness value.
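The standard PSO procedure described above (initialization, clamped velocity and position updates, and best-value bookkeeping) can be sketched as follows; all parameter defaults are illustrative, not the settings used in this work:

```python
import random

def pso(f, dim, n=30, k_max=100, w=0.7, c1=1.5, c2=1.5,
        x_max=5.0, v_max=1.0, seed=0):
    """Minimal standard PSO minimizing f over [-x_max, x_max]^dim."""
    rng = random.Random(seed)
    clamp = lambda val, lim: max(-lim, min(lim, val))
    # Initialize positions and velocities within their allowed ranges.
    x = [[rng.uniform(-x_max, x_max) for _ in range(dim)] for _ in range(n)]
    v = [[rng.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n)]
    p_best = [xi[:] for xi in x]          # individual best positions
    p_val = [f(xi) for xi in x]           # individual best fitness values
    g_idx = min(range(n), key=lambda i: p_val[i])
    g_best, g_val = p_best[g_idx][:], p_val[g_idx]   # global best
    for _ in range(k_max):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update with inertia, cognitive, and social terms.
                v[i][d] = clamp(w * v[i][d]
                                + c1 * r1 * (p_best[i][d] - x[i][d])
                                + c2 * r2 * (g_best[d] - x[i][d]), v_max)
                x[i][d] = clamp(x[i][d] + v[i][d], x_max)
            fi = f(x[i])
            if fi < p_val[i]:             # update individual best
                p_val[i], p_best[i] = fi, x[i][:]
                if fi < g_val:            # update global best
                    g_val, g_best = fi, x[i][:]
    return g_best, g_val
```

For example, minimizing the 2-D sphere function `sum(t * t for t in x)` drives the best fitness toward zero within the default iteration budget.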
The flowchart of the standard particle swarm optimization (PSO) algorithm is shown in
Figure 2.
The inertia weight factor $\omega$ governs the particle's retention of its previous velocity, playing a critical role in balancing global exploration (search space coverage) and local exploitation (neighborhood refinement). A larger $\omega$ enhances global search capability by encouraging broader exploration, while a smaller $\omega$ strengthens local searches by focusing on refined exploitation. To achieve an optimal balance, $\omega$ is typically designed as a dynamically varying function throughout the iterations. Below, we analyze four representative dynamic adjustment strategies and their performance implications:
The inertia weight minimum value is denoted as $\omega_{min}$, which also represents the initial value of the inertia weight. The current iteration count is $k$, the maximum iteration count is $k_{max}$, and the inertia weight maximum value is denoted as $\omega_{max}$, which corresponds to the inertia weight value at the maximum iteration. A tuning constant is introduced to parameterize specific adjustment strategies. The variation curves of the four improved inertia weight factors are illustrated in
Figure 3.
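As a hypothetical illustration (the exact four strategies are those plotted in Figure 3), commonly used schedule shapes that grow from $\omega_{min}$ toward $\omega_{max}$ include linear, square-root, quadratic, and exponential curves:

```python
import math

def omega_linear(k, k_max, w_min, w_max):
    """Linear growth from w_min (k = 0) to w_max (k = k_max)."""
    return w_min + (w_max - w_min) * k / k_max

def omega_concave(k, k_max, w_min, w_max):
    """Square-root schedule: grows quickly in early iterations."""
    return w_min + (w_max - w_min) * math.sqrt(k / k_max)

def omega_convex(k, k_max, w_min, w_max):
    """Quadratic schedule: grows slowly in early iterations."""
    return w_min + (w_max - w_min) * (k / k_max) ** 2

def omega_exp(k, k_max, w_min, w_max, lam=4.0):
    """Exponential schedule; lam is a tuning constant
    (approaches w_max only asymptotically)."""
    return w_max - (w_max - w_min) * math.exp(-lam * k / k_max)
```

The concave curve favors early exploration handoff, while the convex curve keeps the weight small (local search) longer.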
The cognitive learning factor $c_1$ and social learning factor $c_2$ critically influence the adjustment of particle trajectories, representing the particle's emphasis on individual experiential knowledge and collective swarm experience, respectively. When $c_1$ is assigned a larger value, particles exhibit a stronger tendency to search near their own known optimal positions. Conversely, a larger $c_2$ drives particles to converge toward optimal positions identified by other particles in the swarm.
The enhanced learning factors are designed as follows:

$$c_1 = c_{1max} - \left( c_{1max} - c_{1min} \right) \frac{k}{k_{max}}$$

$$c_2 = c_{2min} + \left( c_{2max} - c_{2min} \right) \frac{k}{k_{max}}$$

where $c_{1max}$ and $c_{1min}$ represent the predefined maximum and minimum values of the cognitive learning factor, respectively; $c_{2max}$ and $c_{2min}$ denote the predefined maximum and minimum values of the social learning factor, respectively; $k$ indicates the current iteration count; and $k_{max}$ signifies the maximum iteration count.
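A common design, consistent with the maximum and minimum values defined above, varies the two factors linearly over the iterations so that emphasis shifts from individual experience to swarm experience; this is a sketch with illustrative default values:

```python
def learning_factors(k, k_max, c1_max=2.5, c1_min=0.5, c2_max=2.5, c2_min=0.5):
    """c1 decays from c1_max to c1_min while c2 grows from c2_min to c2_max,
    shifting the swarm from exploration of individual bests toward
    convergence on the global best."""
    t = k / k_max
    c1 = c1_max - (c1_max - c1_min) * t
    c2 = c2_min + (c2_max - c2_min) * t
    return c1, c2
```

At the first iteration the cognitive term dominates; by the final iteration the social term dominates.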
To assess the efficacy of the enhanced PSO algorithm, the widely adopted Rastrigin function is selected from the standard benchmark test functions. This function is highly multimodal with an abundance of local minima, posing a significant challenge: optimization algorithms are highly prone to becoming trapped in its local optima. The mathematical expression of the Rastrigin function is as follows:
$$f(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10 \cos\left( 2 \pi x_i \right) + 10 \right]$$

where $x_i \in [-5.12, 5.12]$; the global minimum is $f(x) = 0$, attained at $x = (0, 0, \ldots, 0)$.
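For reference, the Rastrigin function is straightforward to implement:

```python
import math

def rastrigin(x):
    """Rastrigin benchmark: global minimum f = 0 at the origin,
    surrounded by many regularly spaced local minima."""
    return sum(t * t - 10.0 * math.cos(2.0 * math.pi * t) + 10.0 for t in x)
```

Away from the origin the cosine term creates a dense grid of local minima, which is what makes this function a stringent test of global convergence.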
Both the standard PSO algorithm and the improved PSO algorithm were applied to search for the minimum value of the Rastrigin function, with $f(x)$ serving as the fitness function. The computational results are shown in
Figure 4.
As shown in
Figure 4, both the improved PSO algorithm and the standard PSO algorithm successfully find the minimum value of the Rastrigin function. Between 0 and 10 iterations, the fitness values of both algorithms decrease rapidly, but the improved PSO algorithm exhibits a markedly faster descent rate. Furthermore, the improved PSO algorithm reaches a stable state (i.e., finds the optimal solution) at 20 iterations, whereas the standard PSO algorithm becomes trapped in local optima between 10 and 20 iterations and only stabilizes gradually after 30 iterations. In conclusion, the improved PSO algorithm is significantly superior to the standard PSO algorithm: it not only converges faster but is also more stable and more effective at avoiding local optima.
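The comparison described above can be reproduced in outline by combining the time-varying inertia weight and learning factors into a single loop; all numeric settings below are illustrative defaults, not the paper's configuration:

```python
import math, random

def rastrigin(x):
    """Rastrigin benchmark (global minimum 0 at the origin)."""
    return sum(t * t - 10.0 * math.cos(2.0 * math.pi * t) + 10.0 for t in x)

def improved_pso(f, dim=2, n=30, k_max=100, x_max=5.12, v_max=1.0,
                 w_min=0.4, w_max=0.9, c_max=2.5, c_min=0.5, seed=1):
    """PSO with linearly varying inertia weight and learning factors."""
    rng = random.Random(seed)
    clamp = lambda val, lim: max(-lim, min(lim, val))
    x = [[rng.uniform(-x_max, x_max) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    p_best = [xi[:] for xi in x]
    p_val = [f(xi) for xi in x]
    g_idx = min(range(n), key=lambda i: p_val[i])
    g_best, g_val = p_best[g_idx][:], p_val[g_idx]
    for k in range(k_max):
        t = k / k_max
        w = w_min + (w_max - w_min) * t       # inertia weight schedule
        c1 = c_max - (c_max - c_min) * t      # cognitive: large -> small
        c2 = c_min + (c_max - c_min) * t      # social: small -> large
        for i in range(n):
            for d in range(dim):
                v[i][d] = clamp(w * v[i][d]
                                + c1 * rng.random() * (p_best[i][d] - x[i][d])
                                + c2 * rng.random() * (g_best[d] - x[i][d]),
                                v_max)
                x[i][d] = clamp(x[i][d] + v[i][d], x_max)
            fi = f(x[i])
            if fi < p_val[i]:
                p_val[i], p_best[i] = fi, x[i][:]
                if fi < g_val:
                    g_val, g_best = fi, x[i][:]
    return g_val
```

Running `improved_pso(rastrigin)` drives the best fitness far below the value at a typical random starting point, though the exact iteration counts and curves reported above come from the authors' configuration.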
3.2. Error Compensation Based on Improved PSO Algorithm
Based on the aforementioned improved PSO algorithm, a simulation experiment for Delta robot error compensation is conducted. Forty randomly selected points in the workspace serve as compensation targets. First, the ideal position coordinates of these 40 points are input into the robot's inverse kinematics program to obtain 40 sets of ideal actuation angles. Using the error model, combined with the robot's structural parameters and error source magnitudes, 40 sets of actual position coordinates are calculated. To ensure the effectiveness of compensation, the structural length error sources and the spherical joint clearance are randomly varied within [0, 0.100] mm, while the angular error sources are randomly varied within [0, 0.100°].
Given that actuation angle errors exert the most significant impact on robot positioning accuracy among all error sources, the compensation strategy primarily focuses on compensating for the three actuation angle errors of the robot, that is, $D = 3$. Using the proposed improved PSO algorithm, the optimal actuation angle errors are sought to minimize the comprehensive end-effector error $\Delta e$. The configured parameters of the improved PSO algorithm comprise the swarm size $n$; the maximum iteration count $k_{max}$; the cognitive learning factor maximum $c_{1max}$ and minimum $c_{1min}$; the social learning factor maximum $c_{2max}$ and minimum $c_{2min}$; the inertia weight minimum $\omega_{min}$ and maximum $\omega_{max}$; and the particle velocity limits $\pm V_{max}$.
First, the first set of data was selected, and the improved particle swarm optimization (PSO) algorithm was utilized to optimize the robot's actuation angle errors, with the fitness function defined as follows:

$$F = \Delta e = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2}$$

where $\Delta x$, $\Delta y$, and $\Delta z$ are the residual position errors along the three coordinate axes.
The iteration curve of its fitness function is shown in
Figure 5. After 10 iterations, the results stabilized.
The 40 groups of data records before and after compensation are shown in
Table 2. The results before and after compensation are presented in
Figure 6. The average comprehensive error values of the 40 datasets before and after compensation are recorded, denoted as $\overline{\Delta e}$, along with the average error values in the $X$, $Y$, and $Z$ directions, denoted as $\overline{\Delta x}$, $\overline{\Delta y}$, and $\overline{\Delta z}$, as shown in
Table 3.
As shown in
Figure 6 and
Table 3, after applying the improved PSO algorithm for optimization compensation, the errors of most data points are reduced to below 0.050 mm. The average comprehensive error decreases
from 0.086 mm before compensation to 0.022 mm after compensation, representing a 74.4% reduction. These results demonstrate the algorithm’s effectiveness in reducing the robot’s overall comprehensive error. However, it is observed that the compensation effect is negligible or even approaches zero for certain data points, as in the bolded sets of data in
Table 2, indicating that the improvement is not fully stable. Additionally, the average errors in the $X$ and $Y$ directions ($\overline{\Delta x}$ and $\overline{\Delta y}$) are reduced by 36.3% and 10.7%, respectively. However, in the $Z$ direction, the average error $\overline{\Delta z}$ increases due to error accumulation along the Z-axis. Therefore, further algorithm optimization is required to enhance stability.