Article

SO-PSO-ILC: An Innovative Hybrid Algorithm for Precise Robotic Arm Trajectory Tracking

Yu Dou 1,* and Emmanuel Prempain 2
1 School of Intelligent Manufacturing, Huzhou College, Huzhou 313000, China
2 School of Engineering, University of Leicester, Leicester LE1 7RH, UK
* Author to whom correspondence should be addressed.
Actuators 2026, 15(1), 20; https://doi.org/10.3390/act15010020
Submission received: 24 October 2025 / Revised: 2 December 2025 / Accepted: 12 December 2025 / Published: 31 December 2025
(This article belongs to the Section Actuators for Robotics)

Abstract

This paper proposes Social-only Particle Swarm Optimization-based Iterative Learning Control (SO-PSO-ILC) to address the limitations of conventional Iterative Learning Control (ILC) in model dependency and manual parameter tuning. The proposed method autonomously optimizes the learning gain using a social-only PSO variant. Comparative results on four distinct trajectories demonstrate superior performance: SO-PSO-ILC achieved a final RMSE of 0.0008 m in the linear path test and a precision 4.6 times higher than the baseline in the waveform path test. It also exhibits the fastest convergence rate, outperforming PSO-ILC in both tracking accuracy and computational cost while avoiding the convergence issues observed in WSA-ILC. The simulation results validate that swarm-optimized ILC provides a robust framework for repetitive tasks requiring high accuracy.

1. Introduction

Iterative Learning Control (ILC) is a powerful control strategy designed for systems performing repetitive tasks [1,2,3]. Unlike traditional control methods that rely solely on real-time adjustments, ILC exploits task repeatability to improve performance by using data from previous cycles to refine control inputs iteratively. This approach is particularly effective in applications requiring high precision, such as robotic assembly, semiconductor manufacturing, and robot-assisted medicine, where it can compensate for persistent disturbances or systematic errors. Recent advances in computational power have further enhanced ILC’s practicality, enabling its implementation in increasingly complex systems.
The key advantage of ILC lies in its ability to improve tracking performance through iterative learning. While Proportional-Integral-Derivative (PID) controllers remain popular for their simplicity and performance, they lack the ability to use information from past iterations to improve accuracy in subsequent ones [4,5,6]. For instance, in robot-assisted medical procedures, PID controllers respond only to immediate errors without utilizing historical data. In contrast, ILC achieves superior accuracy by continuously optimizing performance over successive iterations. This learning capability becomes particularly valuable in applications where system dynamics are partially unknown or too complex to model precisely. Furthermore, ILC’s data-driven nature allows it to adapt to subtle changes in system behavior that might otherwise go unnoticed.
However, ILC’s effectiveness depends critically on two factors: the accuracy of system modeling and appropriate selection of learning parameters. Modeling inaccuracies can lead to misguided adjustments, causing slow convergence or instability, while poorly tuned parameters may induce oscillations. Additionally, ILC assumes highly repeatable task conditions, which limits its effectiveness in variable environments like automated warehousing systems with fluctuating operational conditions. These limitations become particularly apparent when dealing with non-repetitive disturbances or when the reference trajectory changes significantly between iterations.
To overcome these limitations, researchers have integrated optimization algorithms with ILC frameworks [7,8,9]. For example, Particle Swarm Optimization (PSO) has proven particularly effective for optimizing ILC parameters like learning gain [10,11,12]. In manufacturing applications, PSO-ILC has demonstrated significant performance improvements in complex environments where traditional tuning methods fall short. Nevertheless, the computational complexity of standard PSO has motivated the development of simplified alternatives. The trade-off between computational efficiency and optimization performance remains an active area of research, especially for real-time applications with strict timing constraints.
Recent work has introduced computationally efficient variants such as Social-only PSO (SO-PSO) [13] and the Weightless Swarm Algorithm (WSA) [14,15,16]. These approaches reduce computational demands by eliminating certain components: SO-PSO removes the cognitive term, while WSA additionally omits the inertia weight. Although these simplified algorithms maintain effective optimization capabilities for real-time applications, they exhibit reduced flexibility and slower convergence in high-dimensional or complex problem spaces compared to standard PSO. Experimental results have shown that these algorithms can achieve performance comparable to standard PSO in many practical scenarios while requiring significantly fewer computational resources. This makes them particularly attractive for embedded systems with limited processing power.
Building on these developments, this paper proposes a new hybrid approach, SO-PSO-ILC, to balance computational efficiency with control performance. We systematically compare its effectiveness against both PSO-ILC and WSA-ILC across various operational scenarios. The choice between these methods depends on specific application requirements including precision demands, computational constraints, and task complexity. Our comparative analysis considers not only tracking accuracy but also convergence speed and computational overhead, providing practical insights for control system designers. The results demonstrate that the proposed SO-PSO-ILC offers a favorable compromise between performance and efficiency in most test cases.
The remainder of this paper is organized as follows. Section 2 presents the modeling framework, including manipulator dynamics, system linearization, and our proposed control scheme. Section 3 evaluates and compares the performance of PSO-ILC, SO-PSO-ILC, and WSA-ILC controllers across different trajectory scenarios. Section 4 provides an in-depth analysis of the results, discusses the trade-offs between computational efficiency and tracking performance, and compares the proposed approach with other ILC strategies. Finally, Section 5 summarizes our key contributions and discusses potential improvements for future work, including directions for extending the proposed approach to more challenging control scenarios.

2. Methodology

This section presents the robotic manipulator control framework. We first establish the dynamic model, then develop the linearization approach, followed by the integrated control scheme. Finally, we derive convergence conditions through Lyapunov-based analysis.

2.1. Robotic System Modeling

The dynamic model serves as the foundation for controller design. Consider an n-DOF robotic manipulator governed by
$$M(q(t))\,\ddot{q}(t) + C(q(t),\dot{q}(t))\,\dot{q}(t) + G(q(t)) + D(t) = \tau(t),$$
where $t \in \mathbb{R}_+$ denotes time, $q(t) \in \mathbb{R}^n$ is the joint position vector, $\dot{q}(t) \in \mathbb{R}^n$ is the joint velocity vector, $\ddot{q}(t) \in \mathbb{R}^n$ is the joint acceleration vector, $M(q(t)) \in \mathbb{R}^{n \times n}$ is the inertia matrix, $C(q(t),\dot{q}(t)) \in \mathbb{R}^{n \times n}$ is the Coriolis and centrifugal matrix, $G(q(t)) \in \mathbb{R}^n$ is the total torque due to gravity, $D(t) \in \mathbb{R}^n$ is the torque disturbance, and $\tau(t) \in \mathbb{R}^n$ is the control input torque vector applied at the joints.
For iterative operations, the dynamics of the manipulator at the k-th iteration are described by
$$M(q_k(t))\,\ddot{q}_k(t) + C(q_k(t),\dot{q}_k(t))\,\dot{q}_k(t) + G(q_k(t)) + D(t) = \tau_k(t),$$
where the subscript k denotes the joint states and control torque input at the k-th iteration.
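To make the manipulator dynamics concrete, the following is a minimal numeric sketch of the inverse dynamics for a hypothetical two-link planar arm with point masses. The masses, lengths, and function name are illustrative assumptions, not the 7-DOF WAM parameters used later in the paper:

```python
import numpy as np

def inverse_dynamics(q, qd, qdd, d=np.zeros(2)):
    """Toy 2-DOF planar-arm dynamics: tau = M(q)*qdd + C(q,qd)*qd + G(q) + D.
    Link masses/lengths are illustrative placeholders, not WAM parameters."""
    m1, m2, l1, l2, g = 1.0, 1.0, 0.5, 0.5, 9.81
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    # Inertia matrix M(q) for point masses at the link ends
    M = np.array([[m1*l1**2 + m2*(l1**2 + 2*l1*l2*c2 + l2**2), m2*(l1*l2*c2 + l2**2)],
                  [m2*(l1*l2*c2 + l2**2),                      m2*l2**2]])
    # Coriolis/centrifugal matrix C(q, qd)
    C = np.array([[-m2*l1*l2*s2*qd[1], -m2*l1*l2*s2*(qd[0] + qd[1])],
                  [ m2*l1*l2*s2*qd[0], 0.0]])
    # Gravity torque vector G(q)
    G = np.array([(m1 + m2)*g*l1*np.cos(q[0]) + m2*g*l2*np.cos(q[0] + q[1]),
                  m2*g*l2*np.cos(q[0] + q[1])])
    return M @ qdd + C @ qd + G + d

# At rest (zero velocity and acceleration), the torque reduces to gravity compensation.
tau = inverse_dynamics(np.array([0.1, 0.2]), np.zeros(2), np.zeros(2))
```

At rest the Coriolis and inertial terms vanish, so the returned torque equals the gravity vector, which gives a quick sanity check on the model.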
To enable iterative control, the dynamics are linearized around the desired trajectory $q_d(t)$. The tracking error is defined as
$$e_k(t) = q_d(t) - q_k(t).$$
A first-order Taylor expansion of each term with respect to the deviations from the desired trajectory yields the linearized dynamics
$$C_d(t)\,\ddot{e}_k(t) + E_d(t)\,\dot{e}_k(t) + F_d(t)\,e_k(t) = S_d(t) - \tau_k(t),$$
where the following terms are defined:
$$C_d(t) := M(q_d(t)),$$
$$E_d(t) := C_{\dot{q}}\big(q_d(t), \dot{q}_d(t)\big),$$
$$F_d(t) := M_q\big(q_d(t)\big)\,\ddot{q}_d(t) + C_q\big(q_d(t), \dot{q}_d(t)\big)\,\dot{q}_d(t) + G_q\big(q_d(t)\big),$$
$$S_d(t) := M(q_d(t))\,\ddot{q}_d(t) + C(q_d(t),\dot{q}_d(t))\,\dot{q}_d(t) + G(q_d(t)) + D(t),$$
with the subscripts $q$ and $\dot{q}$ denoting partial derivatives of the corresponding terms evaluated along the desired trajectory.

2.2. Iterative Learning Control Design

The following assumptions ensure the linearized dynamics (4) are well-posed and guarantee convergence of the iterative learning scheme:
1. The desired trajectory $q_d(t)$ is smooth, continuously differentiable, and achievable over the operating interval $[0, T]$.
2. The signals $q_d(t)$, $\dot{q}_d(t)$, $\ddot{q}_d(t)$, and $D(t)$ are uniformly bounded for all $t \in [0, T]$.
3. The initial conditions are reset at each iteration such that $q_d(0) = q_k(0)$ and $\dot{q}_d(0) = \dot{q}_k(0)$ for all iterations k. This ensures alignment of resetting conditions and leads to $e_k(0) = 0$ and $\dot{e}_k(0) = 0$.
4. The matrix $\dot{C}_d(t) - E_d(t)$ is skew-symmetric.
Under Assumptions 1–4, the proposed controller architecture combines feedback correction and feedforward learning in a hybrid structure developed by [17]. As illustrated in Figure 1, the control input at each iteration k integrates two complementary components through
$$\tau_k(t) = \underbrace{T_k(t)}_{\text{feedback}} + \underbrace{H_k(t)}_{\text{feedforward}},$$
where $\tau_k(t)$ represents the total control torque, $T_k(t)$ serves as the feedback correction term for real-time error compensation, and $H_k(t)$ functions as the feedforward learning term that accumulates knowledge across iterations.
The feedback component generates immediate corrective actions based on current tracking errors according to
$$T_k(t) = K e_k(t) + L \dot{e}_k(t),$$
with $K > 0$ and $L > 0$ being symmetric positive definite gain matrices for position and velocity errors, respectively. The feedback gains $K$ and $L$ are designed to be sufficiently large while maintaining a proportional relationship:
$$K = \gamma L, \quad \gamma > 0.$$
In parallel, the feedforward component evolves through an iterative learning process described by
$$H_k(t) = H_{k-1}(t) + \alpha T_{k-1}(t),$$
where the learning gain $\alpha > 0$. This dual-mechanism approach provides multiple advantages: immediate disturbance rejection through feedback control, progressive performance improvement via feedforward learning, and comprehensive compensation for both transient and persistent disturbances.
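The hybrid control law and learning update above can be sketched as a per-sample computation. This is a minimal illustration assuming scalar gains for a single joint (the paper uses matrix gains $K$, $L$); the function name and the value of $\alpha$ are assumptions for demonstration:

```python
def control_input(e_k, edot_k, H_prev, T_prev, K=50.0, L=30.0, alpha=0.3):
    """Hybrid ILC input for one sample: tau_k = T_k + H_k, with the feedback
    term T_k = K*e + L*edot and the learning law H_k = H_{k-1} + alpha*T_{k-1}.
    Scalar K, L, alpha stand in for the paper's matrix gains (illustrative)."""
    T_k = K * e_k + L * edot_k      # feedback correction (real-time)
    H_k = H_prev + alpha * T_prev   # feedforward learning (across iterations)
    return T_k + H_k, T_k, H_k
```

The same function works elementwise on sampled error arrays, so one call per time step reproduces the structure of Figure 1: the feedback path reacts to the current error, while the feedforward path carries the accumulated correction from earlier trials.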
The following outlines the convergence analysis using a Lyapunov function candidate, establishing the conditions under which the proposed ILC scheme guarantees error reduction over iterations.
Definition 1.
For a matrix $A(s)$ that depends on a variable s, the maximum norm of $A(s)$ over a domain $[a, b]$ is defined as
$$\|A\|_m := \max_{s \in [a,b]} \|A(s)\|_2.$$
Definition 2.
For a symmetric matrix $B \in \mathbb{R}^{n \times n}$, the minimum eigenvalue, denoted $\lambda_{\min}(B)$, is defined as
$$\lambda_{\min}(B) := \min\{\lambda : \lambda \text{ is an eigenvalue of } B\}.$$
Definition 3.
The Lyapunov function candidate $V_k(t)$ for the k-th iteration is defined as
$$V_k(t) := \int_0^t z_k^{T}(\tau)\, \Lambda\, z_k(\tau)\, d\tau,$$
where
$$z_k(t) = \dot{e}_k(t) + \gamma e_k(t), \qquad \Lambda = \alpha L.$$
Theorem 1
(Condition of Convergence). For all $t \in [0, T]$ and $k \in \mathbb{N}$, given the control input defined in Equation (9) and the learning law in Equation (12), the following properties hold:
$$V_{k+1}(t) \le V_k(t),$$
and
$$\lim_{k \to \infty} e_k(t) = \lim_{k \to \infty} \dot{e}_k(t) = 0.$$
These properties are satisfied if the controller fulfills the following conditions:
$$l_a := \lambda_{\min}\big((2-\alpha)L\big) - \big(2\gamma \|C_d\|_m + \|\dot{C}_d\|_m\big) > 0,$$
$$l_b := \lambda_{\min}\big((2-\alpha)L\big) - 2\gamma \|F_d\|_m > 0,$$
$$l_a\, l_b > \frac{1}{\gamma}\, \|F_d\|_m.$$
The detailed proof of this theorem, based on Lyapunov stability analysis for hybrid ILC frameworks, follows the approach in [17].
Remark 1.
The learning gain $\alpha$ must be carefully chosen at each iteration to strictly satisfy the stability conditions (20)–(22). This ensures that the energy difference $\delta V_k$ remains non-positive and that the tracking error $e_k(t)$ converges monotonically to zero.
Remark 2.
In practical implementation, it is essential to monitor the error norm $\|e_k(t)\|$ for oscillations. When instability is detected, $\alpha$ should be reduced while ensuring $\alpha > 0$ to maintain learning capability.
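Remark 2's monitoring rule can be sketched as a simple guard applied between trials. The halving factor and positive floor below are illustrative assumptions, not values specified by the paper:

```python
def adapt_learning_gain(alpha, error_norms, shrink=0.5, alpha_min=1e-4):
    """Reduce alpha when the error norm grows between consecutive trials
    (a sign of oscillation), while keeping alpha strictly positive so that
    learning continues. shrink and alpha_min are illustrative choices."""
    if len(error_norms) >= 2 and error_norms[-1] > error_norms[-2]:
        alpha = max(alpha * shrink, alpha_min)
    return alpha
```

Calling this after each trial with the history of error norms implements the "reduce on instability, never to zero" behaviour described in the remark.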

2.3. Swarm Optimization Algorithms

PSO is an effective approach for solving complex optimization problems [18,19,20]. Building upon biological foundations, the standard PSO framework implements a systematic search process. In this formulation, a population of particles explores a D-dimensional search space, with each particle i maintaining two key vectors: its current position $x_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]$ and velocity $v_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]$. These vectors are initialized randomly within predefined bounds that define the feasible region of the problem.
The algorithm iteratively updates each particle's trajectory based on both personal experience and swarm intelligence. The velocity update equation combines three components:
$$v_i(m+1) = w\, v_i(m) + c_1 r_1 \big(p_{\text{best},i} - x_i(m)\big) + c_2 r_2 \big(g_{\text{best}} - x_i(m)\big),$$
where $w$ represents the inertia weight that balances exploration and exploitation, while $c_1$ and $c_2$ are the cognitive and social acceleration coefficients, respectively. The terms $r_1$ and $r_2$ denote random numbers sampled uniformly from $[0, 1]$, introducing stochasticity to the search process. $p_{\text{best},i}$ tracks the best solution found by particle i, and $g_{\text{best}}$ represents the best solution discovered by the entire swarm.
Following the velocity update, each particle's position is adjusted using
$$x_i(m+1) = x_i(m) + v_i(m+1).$$
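The velocity and position updates above can be assembled into a compact, self-contained sketch of standard PSO. The population size, bounds, and sphere test objective are illustrative assumptions; only the update equations come from the text:

```python
import numpy as np

def pso(f, dim=2, n=20, iters=100, w=0.5, c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    """Standard PSO minimizing f over a box, with per-component uniform r1, r2."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))          # random initial positions
    v = np.zeros((n, dim))
    pbest = x.copy()
    pcost = np.array([f(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        x = np.clip(x + v, lo, hi)                                  # position update
        cost = np.array([f(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]      # personal bests
        gbest = pbest[pcost.argmin()].copy()                        # global best
    return gbest, float(pcost.min())

# Minimize the 2-D sphere function; the optimum lies at the origin.
gbest, best_cost = pso(lambda p: float(np.sum(p**2)))
```

With the classic parameter set $w = 0.5$, $c_1 = c_2 = 2$ used later in the paper, the swarm reliably locates the neighbourhood of the sphere minimum within the given budget.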
The Social-only PSO (SO-PSO) variant reduces computational complexity by removing the cognitive component. This simplification halves the number of terms in the velocity update equation compared with standard PSO, leading to fewer arithmetic operations and reduced memory requirements per iteration:
$$v_i(m+1) = w\, v_i(m) + c_2 r_2 \big(g_{\text{best}} - x_i(m)\big).$$
This modified approach accelerates convergence by directing particles toward the current best solution without individual exploration. However, the reduced population diversity increases susceptibility to premature convergence in some cases. The SO-PSO algorithm is particularly effective for optimization problems featuring well-defined global optima and relatively smooth search spaces.
The Weightless Swarm Algorithm (WSA) is another variant that enhances computational efficiency through strategic simplification. By eliminating the inertia weight $w$ and the cognitive coefficient $c_1$, WSA reduces both parameter tuning requirements and computational overhead. The modified velocity update equation becomes
$$v_i(m+1) = c_2 r_2 \big(g_{\text{best}} - x_i(m)\big).$$
Compared with PSO and SO-PSO, WSA adopts a more aggressive strategy. While this enables faster convergence and lower computational resource consumption on simple optimization problems (e.g., unimodal functions), it also makes the algorithm more prone to falling into local optima.
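The three velocity-update rules differ only in which terms they retain, which a side-by-side sketch makes explicit. The function name and interface are illustrative assumptions:

```python
import numpy as np

def velocity_update(variant, v, x, pbest, gbest, w=0.5, c1=2.0, c2=2.0, rng=None):
    """One velocity step for the three variants; each drops terms from the
    full PSO rule (inertia + cognitive + social)."""
    rng = rng or np.random.default_rng(0)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    social = c2 * r2 * (gbest - x)
    if variant == "pso":       # all three terms
        return w * v + c1 * r1 * (pbest - x) + social
    if variant == "so-pso":    # cognitive term removed
        return w * v + social
    if variant == "wsa":       # inertia and cognitive terms removed
        return social
    raise ValueError(f"unknown variant: {variant}")
```

When a particle sits at the swarm best with zero velocity, all three rules return a zero update, which is a convenient consistency check between the variants.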

2.4. Hybrid SO-PSO-ILC Algorithm

The integration of swarm algorithms (PSO, SO-PSO, and WSA) with ILC enables autonomous adaptation of the learning gain α , eliminating manual parameter tuning. These hybrid approaches (PSO-ILC, SO-PSO-ILC, and WSA-ILC) iteratively optimize α to minimize tracking error while preserving stability, consistent with Lyapunov-based conditions.
The optimization process begins by initializing a swarm of particles, where each particle represents a candidate value α i for the learning gain, randomly generated within predefined bounds [ α min , α max ] . For each candidate α i , the ILC system is simulated, and the resulting tracking error e i ( t ) over the task duration T is evaluated. The learning gain that achieves the minimum tracking error is selected.
The detailed procedure of the SO-PSO-ILC algorithm is summarized in Algorithm 1. The key innovation is the embedding of social-only PSO optimization within the ILC loop, where at each iteration, the learning gain α is autonomously optimized to minimize tracking error before updating the feedforward learning term.
Algorithm 1 SO-PSO-ILC
Require: Desired trajectory q_d(t); feedback gains K, L; max iterations k_max
Ensure: Optimized control input τ_{k_max}(t)
 1: Initialize k = 0, H_0(t) = 0
 2: while k < k_max and not converged do
 3:   k ← k + 1
 4:   Execute Trial k:
 5:     Apply τ_k(t) = K e_k(t) + L ė_k(t) + H_k(t)
 6:     Measure q_k(t), compute error e_k(t) and RMSE J_k
 7:   SO-PSO Optimization:
 8:     Initialize swarm of candidate learning gains α_i
 9:     while swarm not converged do
10:       Update velocities: v_i ← w·v_i + c_2·r_2·(α_gbest - α_i)
11:       Update positions: α_i ← α_i + v_i
12:       Evaluate J_k for each α_i; update α_gbest
13:     end while
14:     Set optimal gain α_k* ← α_gbest
15:   Update Learning Law:
16:     H_{k+1}(t) ← H_k(t) + α_k*·(K e_k(t) + L ė_k(t))
17: end while
18: return τ_k(t)
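Algorithm 1 can be prototyped end-to-end on a toy single-joint plant. The unit-inertia model, gains, gain bounds, and reference below are illustrative assumptions rather than the paper's 7-DOF WAM setup; the sketch only demonstrates the structure of the trial/optimize/update loop:

```python
import numpy as np

def run_trial(H, ref, K=5.0, L=1.0, dt=0.01):
    """One ILC trial on a toy unit-inertia joint (double integrator):
    apply tau = K*e + L*edot + H and record error and feedback torque."""
    q, qdot = 0.0, 0.0
    e, tau_fb = np.zeros(len(ref)), np.zeros(len(ref))
    for t in range(len(ref)):
        e[t] = ref[t] - q
        edot = -qdot  # illustrative simplification: reference treated as slowly varying
        tau_fb[t] = K * e[t] + L * edot
        qddot = tau_fb[t] + H[t]   # unit inertia, no gravity or friction
        qdot += qddot * dt
        q += qdot * dt
    return e, tau_fb, float(np.sqrt(np.mean(e**2)))

def so_pso_gain(cost, bounds=(0.1, 1.5), n=2, iters=3, w=0.5, c2=2.0, seed=0):
    """Social-only PSO over candidate learning gains alpha (Algorithm 1, steps 7-14)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    a = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    costs = np.array([cost(ai) for ai in a])
    gbest, gcost = a[costs.argmin()], costs.min()
    for _ in range(iters):
        v = w * v + c2 * rng.random(n) * (gbest - a)  # social-only velocity update
        a = np.clip(a + v, lo, hi)
        costs = np.array([cost(ai) for ai in a])
        if costs.min() < gcost:
            gbest, gcost = a[costs.argmin()], costs.min()
    return gbest

ref = np.sin(np.linspace(0.0, np.pi, 200))  # toy joint-space reference
H = np.zeros_like(ref)                       # feedforward term, H_0 = 0
rmse_hist = []
for k in range(10):
    e, tau_fb, rmse = run_trial(H, ref)
    rmse_hist.append(rmse)
    # Choose alpha that minimizes next-trial RMSE among swarm candidates.
    alpha = so_pso_gain(lambda a: run_trial(H + a * tau_fb, ref)[2])
    H = H + alpha * tau_fb                   # learning-law update (step 16)
```

Because the swarm evaluates each candidate gain on the actual next-trial cost, the recorded RMSE decreases over the ten trials, mirroring the convergence behaviour reported in Section 3 on a much simpler plant.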

3. Simulation and Results

This section presents the simulation results comparing the performance of various ILC approaches for robotic trajectory tracking. The following subsections detail our simulation setup and analyze the comparative performance across different trajectory types.

3.1. Experimental Setup

The control strategies were validated through numerical simulations of a 7-DOF WAM Arm robotic manipulator implemented in Simscape, with all algorithms developed in MATLAB R2025a. The physical structure of the WAM Arm (the prototype for the simulation model) is shown in Figure 2 [21].
Three control approaches were systematically compared: PSO-ILC, SO-PSO-ILC, and WSA-ILC. All algorithms shared identical baseline parameters, including feedback controller gains ( K = 50 I_7 , L = 30 I_7 ) and a maximum of 20 ILC iterations. Each algorithm used a population size of 2, with 3 swarm iterations per ILC update. The remaining parameter configurations for each algorithm are summarized in Table 1.
Controller performance was quantified using the Root Mean Square Error (RMSE), in meters, over the trajectory duration. The RMSE is defined as
$$\mathrm{RMSE} = \sqrt{\frac{1}{T} \int_0^T \|e_k(t)\|^2\, dt}.$$
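For sampled data, the $\tfrac{1}{T}\!\int_0^T$ factor reduces to a mean over samples, since the duration cancels against the sample count. A small helper, assuming errors are stored as an (n_samples × n_joints) array (the function name is an illustrative choice):

```python
import numpy as np

def trajectory_rmse(e):
    """Discrete RMSE of a sampled joint-space error signal.
    e has shape (n_samples, n_joints); the squared joint-error norm is
    averaged over samples, then square-rooted."""
    e = np.asarray(e, dtype=float)
    return float(np.sqrt(np.mean(np.sum(e**2, axis=1))))
```

A constant single-joint error of 0.1 m yields an RMSE of exactly 0.1 m, and a constant two-joint error of (0.3, 0.4) yields 0.5 m, matching the Euclidean norm per sample.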
To ensure statistical reliability, given the stochastic nature of swarm optimization, each simulation was repeated 10 times with different random-number seeds. The evaluation considered four trajectory types: linear path, waveform path, circular path, and spiral path, as shown in Figure 3.

3.2. Performance Comparison

The comparative performance of the three algorithms across four trajectory types is systematically analyzed below, with results supported by numerical data, convergence curves, and characteristic summaries.
The consolidated numerical results of final tracking accuracy are presented in Table 2, and the convergence processes over 20 iterations are visualized in Figure 4. As clearly shown, the proposed SO-PSO-ILC algorithm achieves the lowest RMSE across all four trajectory types, demonstrating its consistent superiority.
Through comparative testing of the three algorithms across four trajectory types, we observed significant performance differences, as shown in Figure 4. In linear path testing, all algorithms started with an initial error of 0.388 m. SO-PSO-ILC demonstrated optimal convergence characteristics, reaching a 0.150 m error by the second iteration, outperforming PSO-ILC's 0.154 m and WSA-ILC's 0.249 m. After 20 iterations, SO-PSO-ILC maintained its lead with a final error of 0.000835 m, showing a 78.5% improvement over WSA-ILC and a slight advantage over PSO-ILC. In waveform path testing, a similar algorithmic performance ranking was observed. SO-PSO-ILC achieved an advantage in the second iteration with a 0.078 m error, ultimately reaching a precision 4.6 times better than WSA-ILC. In circular path testing, PSO-ILC and SO-PSO-ILC showed comparable performance, while WSA-ILC exhibited noticeable stagnation between the 5th and 10th iterations. In spiral path testing, although PSO-ILC temporarily led at the 6th iteration, SO-PSO-ILC ultimately surpassed the other algorithms with 0.000935 m precision.

4. Discussion

The parameters for the swarm algorithms (e.g., c 1 = 2 , c 2 = 2 , w = 0.5 ) were selected based on standard values commonly employed in the literature [22,23,24]. This specific combination is widely adopted as a robust and well-performing default for a wide range of applications. A small population size was used to prioritize computational efficiency in this proof-of-concept study, a choice supported by findings in [10] indicating that the performance of swarm-based ILC is not significantly influenced by increases in population size.
The performance differences observed in Section 3 can be attributed to the inherent characteristics of each swarm optimization strategy. The comprehensive analysis reveals that SO-PSO-ILC exhibits dual advantages in both convergence speed and final accuracy across multiple trajectory types. The proposed SO-PSO-ILC achieves the best overall performance with moderate computational resource consumption, optimally balancing computational efficiency and tracking accuracy. Its hybrid optimization strategy enhances performance without imposing an excessive computational burden. Although the absolute simulation time difference is minimal on a desktop PC, SO-PSO-ILC’s primary advantage lies in its simplified algorithmic structure, which inherently requires fewer computational operations per iteration than a standard PSO-based approach. This makes it a more suitable candidate for applications where computational throughput is a critical constraint.
The proposed SO-PSO-ILC can be viewed as a form of data-driven ILC [25,26,27]. Its operation is primarily governed by the iterative processing of input-output data to optimize performance, thereby significantly reducing the dependency on an accurate system model. For a broader perspective, Table 3 provides a qualitative comparison of the proposed SO-PSO-ILC with other prominent ILC strategies, including conventional (model-based) ILC, adaptive ILC [7,28,29], and model predictive ILC [30,31]. The key advantage of our approach lies in its autonomous nature, which eliminates the need for manual parameter tuning or a precise system model, while maintaining a favorable computational profile.
While this study presents a simulation-based validation, the proposed SO-PSO-ILC scheme possesses features relevant for real-world application. Its hybrid feedback-feedforward structure offers inherent robustness: the feedback component provides real-time correction against non-repetitive disturbances such as sensor noise and model uncertainties within each iteration, while the ILC learns to cancel repetitive errors. This structure is particularly advantageous for dealing with real-world imperfections and initial modeling inaccuracies.

5. Conclusions

This study demonstrates that the proposed SO-PSO-ILC algorithm achieves superior precision while maintaining computational efficiency. It outperforms both PSO-ILC and WSA-ILC across all tested trajectories. The key advantage lies in its autonomy, as the integration of swarm intelligence eliminates the need for manual tuning of learning gains, which represents a significant hurdle in traditional ILC implementation. This characteristic makes the algorithm particularly suitable for systems where accurate models are difficult to obtain.
A notable limitation is the simulation-only validation, which does not account for real-world constraints such as actuator saturation and measurement noise. Future work will focus on experimental validation under dynamic operating conditions.
Overall, the integration of swarm intelligence with ILC frameworks presents a promising direction for developing autonomous, self-optimizing robotic control systems in precision applications.

Author Contributions

Conceptualization, Y.D.; methodology, Y.D.; software, Y.D.; validation, Y.D.; formal analysis, Y.D.; investigation, Y.D.; resources, Y.D.; data curation, Y.D.; writing—original draft preparation, Y.D.; writing—review and editing, Y.D. and E.P.; visualization, Y.D.; supervision, E.P.; project administration, Y.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting this study are available upon request from the first author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Owens, D.H.; Hätönen, J. Iterative learning control–An optimization paradigm. Annu. Rev. Control 2005, 29, 57–70. [Google Scholar] [CrossRef]
  2. Bristow, D.A.; Tharayil, M.; Alleyne, A.G. A survey of iterative learning control. IEEE Control Syst. Mag. 2006, 26, 96–114. [Google Scholar] [CrossRef]
  3. Ahn, H.S.; Chen, Y.; Moore, K.L. Iterative learning control: Brief survey and categorization. IEEE Trans. Syst. Man, Cybern. Part C (Appl. Rev.) 2007, 37, 1099–1121. [Google Scholar] [CrossRef]
  4. Borase, R.P.; Maghade, D.; Sondkar, S.; Pawar, S. A review of PID control, tuning methods and applications. Int. J. Dyn. Control 2021, 9, 818–827. [Google Scholar] [CrossRef]
  5. Han, J. From PID to Active Disturbance Rejection Control. IEEE Trans. Ind. Electron. 2009, 56, 900–906. [Google Scholar] [CrossRef]
  6. Ang, K.H.; Chong, G.; Li, Y. PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 2005, 13, 559–576. [Google Scholar] [CrossRef]
  7. Dou, Y.; Prempain, E.; Su, L. Adaptive Iterative Learning Control for Robotic Manipulators. In Proceedings of the 2024 UKACC 14th International Conference on Control (CONTROL), Winchester, UK, 10–12 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 96–97. [Google Scholar] [CrossRef]
  8. Dou, Y.; Prempain, E. Precision-Enhanced Trajectory Tracking in Robotic Arms Using Elite Genetic Algorithm and Iterative Learning Control. In Proceedings of the 2024 SICE Festival with Annual Conference (SICE FES), Kochi City, Japan, 27–30 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 179–184. [Google Scholar]
  9. Dou, Y. Advanced Iterative Learning Control Strategies in Robotic Manipulators. Doctoral Thesis, University of Leicester, Leicester, UK, 2025. [Google Scholar] [CrossRef]
  10. Dou, Y.; Prempain, E.; Ting, T.O. Enhancing Robotic Arm Trajectory Tracking via Hybrid Weightless Swarm Algorithm and Iterative Learning Control (WSA-ILC). In Proceedings of the 2024 10th International Conference on Control, Decision and Information Technologies (CoDIT), Vallette, Malta, 1–4 July 2024. [Google Scholar] [CrossRef]
  11. Gu, Q.; Hao, X. Adaptive iterative learning control based on particle swarm optimization. J. Supercomput. 2020, 76, 3615–3622. [Google Scholar] [CrossRef]
  12. Wang, Y.; Chien, C.J.; Chuang, C. An Output Based Adaptive Iterative Learning Control with Particle Swarm Optimization for Robotic Systems. Appl. Mech. Mater. 2013, 479–480, 737–741. [Google Scholar] [CrossRef]
  13. Lihu, A.; Holban, S. A study on the minimal number of particles for a simplified particle swarm optimization algorithm. In Proceedings of the 2011 6th IEEE International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 19–21 May 2011; pp. 299–303. [Google Scholar] [CrossRef]
  14. Dou, Y.; Ting, T.O. Performance of Weightless Swarm Algorithm on Numerical Benchmark Functions. In Engineering Applications of AI and Swarm Intelligence; Springer: Singapore, 2024; pp. 323–342. [Google Scholar] [CrossRef]
  15. Ting, T.; Man, K.L.; Guan, S.U.; Seon, J.; Jeong, T.T.; Wong, P.W. Maximum power point tracking (MPPT) via weightless swarm algorithm (WSA) on cloudy days. In Proceedings of the 2012 IEEE Asia Pacific Conference on Circuits and Systems, Kaohsiung, Taiwan, 2–5 December 2012. [Google Scholar] [CrossRef]
  16. Ting, T.; Man, K.L.; Guan, S.U.; Nayel, M.; Wan, K. Weightless swarm algorithm (WSA) for dynamic optimization problems. In Network and Parallel Computing; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar] [CrossRef]
  17. Kuc, T.Y.; Nam, K.; Lee, J.S. An iterative learning control of robot manipulators. IEEE Trans. Robot. Autom. 1991, 7, 835–842. [Google Scholar] [CrossRef]
  18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  19. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS’95), Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar] [CrossRef]
  20. Clerc, M.; Kennedy, J. The particle swarm—Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  21. WAM Arm. Barrett Technology. 2025. Available online: https://barrett.com/wam (accessed on 11 December 2025).
  22. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  23. Ali, M.; Kaelo, P. Improved particle swarm algorithms for global optimization. Appl. Math. Comput. 2008, 196, 578–593. [Google Scholar] [CrossRef]
  24. Zhang, X.; Lin, Q. Three-learning strategy particle swarm algorithm for global optimization problems. Inf. Sci. 2022, 593, 289–313. [Google Scholar] [CrossRef]
  25. Meindl, M.; Bachhuber, S.; Seel, T. Iterative Model Learning and Dual Iterative Learning Control: A Unified Framework for Data-Driven Iterative Learning Control. IEEE Trans. Autom. Control 2025, 70, 7818–7829. [Google Scholar] [CrossRef]
  26. Zhang, Z.; Zou, Q. Data-driven robust iterative learning control of linear systems. Automatica 2024, 164, 111646. [Google Scholar] [CrossRef]
  27. Lee, Y.H.; Rai, S.; Tsao, T.C. Data-Driven Iterative Learning Control of Nonlinear Systems by Adaptive Model Matching. IEEE/ASME Trans. Mechatronics 2022, 27, 5626–5636. [Google Scholar] [CrossRef]
  28. Chi, R.; Hou, Z.; Xu, J. Adaptive ILC for a class of discrete-time systems with iteration-varying trajectory and random initial condition. Automatica 2008, 44, 2207–2213. [Google Scholar] [CrossRef]
  29. Freeman, C.T.; Meadmore, K.L.; Hughes, A.M.; Grabham, N.; Tudor, J.; Yang, K. Multiple Model Adaptive ILC for Human Movement Assistance. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; pp. 2405–2410. [Google Scholar] [CrossRef]
  30. Han, H.G.; Wang, C.Y.; Sun, H.Y.; Yang, H.Y.; Qiao, J.F. Iterative Learning Model Predictive Control With Fuzzy Neural Network for Nonlinear Systems. IEEE Trans. Fuzzy Syst. 2023, 31, 3220–3234. [Google Scholar] [CrossRef]
  31. Oh, S.K.; Park, B.J.; Lee, J.M. Point-to-point iterative learning model predictive control. Automatica 2018, 89, 135–143. [Google Scholar] [CrossRef]
Figure 1. Integrated control architecture combining feedback path for real-time correction and feedforward path for learned compensation.
Figure 2. 7-DOF WAM Arm (Barrett Technology, Newton, MA, USA).
Figure 3. The four trajectory types used in the simulation.
Figure 4. RMSE comparison of three algorithms over 20 iterations for four trajectory types.
Table 1. Algorithm Parameter Configurations.

| Algorithm  | Cognitive (c_1) | Social (c_2) | Inertia (w) |
|------------|-----------------|--------------|-------------|
| PSO-ILC    | 2               | 2            | 0.5         |
| SO-PSO-ILC | N/A             | 2            | 0.5         |
| WSA-ILC    | N/A             | 2            | N/A         |
Table 2. Final RMSE (m) comparison after 20 iterations across different trajectories.

| Trajectory Type | PSO-ILC  | WSA-ILC  | SO-PSO-ILC |
|-----------------|----------|----------|------------|
| Linear          | 0.001006 | 0.003750 | 0.000835   |
| Waveform        | 0.001258 | 0.005033 | 0.001099   |
| Circular        | 0.001582 | 0.008511 | 0.001877   |
| Spiral          | 0.001045 | 0.005063 | 0.000935   |
Table 3. Comprehensive Comparison of ILC Algorithms.

| Method               | Model Need | Computational Cost | Tuning Need |
|----------------------|------------|--------------------|-------------|
| SO-PSO-ILC           | Low        | Moderate           | Low         |
| Conventional ILC     | High       | Low                | High        |
| Adaptive ILC         | Moderate   | Moderate           | High        |
| Model Predictive ILC | High       | Very High          | High        |
