Article

Green ILC: A Novel Energy-Efficient Iterative Learning Control Approach

School of Engineering, University of Leicester, Leicester LE1 7RH, UK
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(23), 7787; https://doi.org/10.3390/s24237787
Submission received: 24 October 2024 / Revised: 25 November 2024 / Accepted: 4 December 2024 / Published: 5 December 2024

Abstract

In this paper, we introduce Green Iterative Learning Control (Green ILC), an innovative hybrid control method that addresses the critical need for energy-efficient control in dynamic, repetitive-task environments. By integrating the iterative refinement capabilities of traditional Iterative Learning Control (ILC) with the optimization strengths of gradient descent, Green ILC achieves a balanced trade-off between tracking accuracy and energy consumption. This novel approach introduces a cost function that minimizes both tracking errors and control effort, enabling the system to adaptively optimize performance over iterations. Theoretical analysis and simulation results demonstrate that Green ILC not only achieves faster convergence but also provides significant energy savings compared with traditional ILC methods. Notably, Green ILC reduces energy consumption by prioritizing efficiency, making it particularly suitable for energy-intensive applications such as robotics, manufacturing, and process control. While a slight decrease in tracking accuracy is observed, this trade-off is acceptable for scenarios where energy efficiency is paramount. This work establishes Green ILC as a promising solution for modern industrial systems requiring robust and sustainable control strategies.

1. Introduction

In modern control engineering, the rapid development of industrial automation, robotics, and smart manufacturing has confronted control systems with increasingly complex challenges [1]. These systems must achieve high precision and performance in complicated operational environments that are subject to frequent and unpredictable changes. High precision is crucial in applications such as aerospace, automotive manufacturing, and medical robotics, where even the slightest deviation can have significant consequences. However, achieving these goals is only part of the challenge. Modern control systems must also possess strong adaptability and robustness, which are essential to maintaining optimal performance in the face of fluctuating environmental conditions, unknown disturbances, and system uncertainties.
This adaptability is particularly important in dynamic environments, where variables such as load, temperature, and external forces may change unpredictably. Robustness ensures that control systems can continue to operate effectively in the presence of noise, sensor errors, or inaccuracies in modeling. Although traditional control methods, such as PID controllers, are widely used due to their simplicity and effectiveness in linear, time-invariant systems, they often struggle to meet the demands of more advanced applications that require real-time adjustments and learning capabilities [2]. Another widely used control method is Linear Quadratic Regulator (LQR), which provides optimal control strategies by balancing system performance and control energy. LQR achieves this by minimizing a quadratic cost function that penalizes state errors and control effort [3,4,5]. While LQR is highly effective for linear systems with well-defined models, it has limitations when applied to more complex, nonlinear, or time-varying systems. Its assumption of linearity limits its use in many real-world situations where system dynamics can change unpredictably. These conventional methods rely on fixed parameters, which can limit their effectiveness in situations where adaptability and continuous improvement are needed. In highly complex systems, traditional controllers may struggle to handle nonlinearity, time-varying dynamics, or external disturbances, resulting in performance degradation and operational inefficiency.
More advanced control strategies with learning abilities have become essential to overcoming these limitations [6,7,8]. Among these approaches, Iterative Learning Control (ILC) has gained considerable attention due to its ability to improve performance over time by using data from previous cycles. This makes ILC well suited for systems that perform repetitive tasks, such as robotic arms, automated manufacturing processes, or any system that repeats operations in each cycle [9,10,11,12]. By adjusting control actions based on past results, ILC gradually reduces tracking errors, improving the system’s accuracy and efficiency. However, despite its advantages, the performance of traditional ILC can decline significantly in dynamic or unpredictable environments, as it typically relies on a fixed system model that may not account for changes or disturbances.
The study by Cao et al. introduced a modified P-type indirect ILC scheme designed to handle nonlinear systems with mismatched uncertainties and matched disturbances [13]. The method employs a learning-based disturbance estimator to transform the system into an input-to-state stable form, enabling compensation for dynamic mismatched uncertainties. Additionally, a modified P-type ILC is implemented to enhance tracking performance for non-repetitive trajectories. Experimental results demonstrate the effectiveness of this approach, with tracking errors being significantly reduced in complex systems such as continuous stirred tank reactors and flight simulators.
In addition to ILC, gradient descent is another well-known optimization method widely used to fine-tune system parameters by minimizing a cost function [14]. This cost function typically includes essential factors such as output error, control energy, and system stability. Gradient descent is commonly used across various fields due to its general applicability. However, in control systems, gradient descent is typically applied non-iteratively, focusing solely on optimizing current parameters without fully utilizing feedback from previous cycles. This lack of iterative learning limits gradient descent’s effectiveness in dynamic environments where continuous adjustments are necessary. While gradient descent works well for optimizing single objectives in relatively stable settings, it becomes less effective in control systems that require adaptation and iterative refinement.
Although ILC and gradient descent differ in many ways, they share similar strategies in updating control inputs and system parameters iteratively to improve performance. This similarity has led researchers to explore combining the strengths of both ILC and gradient descent to create hybrid control methods that address the limitations of each approach. By merging the iterative learning features of ILC with the optimization properties of gradient descent, these hybrid strategies can improve the adaptability and robustness of control systems, especially in environments that experience frequent changes or disturbances.
Sogo and Adachi presented a gradient-based ILC algorithm for linear discrete-time systems which addresses the limitations of existing ILC schemes. It provides convergence conditions based on system uncertainties, ensuring exponential decay of input error and improving applicability in uncertain systems [15]. Owens et al. introduced a robust gradient-based ILC algorithm and provided conditions for monotonic convergence in the presence of model uncertainties. This approach ensures that tracking errors decrease consistently, making it suitable for dynamic environments. Their work addresses the practical challenges of modeling errors, contributing to the development of more reliable and adaptable control systems [16]. Chu et al. proposed a predictive gradient-based ILC algorithm that extends traditional ILC by incorporating predicted future trials in addition to past trial data. This approach significantly improves convergence speed and robustness to model uncertainties, offering better predictive control capabilities [17]. Huo et al. presented a model-free gradient-based ILC algorithm for nonlinear systems, eliminating the need for system model identification. The algorithm achieves monotonic convergence to zero error and matches the performance of model-based ILC in stroke rehabilitation, with better tracking than linear model-free ILC [18]. He and Pu proposed a novel gradient-based ILC algorithm to enhance proportional-type traditional ILC by addressing its parameter sensitivity. Their algorithm mimics the gradient descent process, ensuring reliable convergence and reducing parameter dependence, making it more robust than traditional ILC [19]. Developing these hybrid methods offers a promising approach to improving system performance in complex, real-world applications.
Different from the common control objective, which focuses on minimizing the error between actual and desired outputs, this paper also considers the effect of input energy consumption. Drawing inspiration from LQR’s approach to optimizing performance, we aim to extend these principles by integrating the iterative learning capabilities of ILC with the optimization strengths of gradient descent.
This paper introduces Green ILC, a novel hybrid control strategy that combines the characteristics of ILC with the optimization strengths of gradient descent. Green ILC is designed to balance tracking accuracy and energy use through a cost function that minimizes both tracking error and control effort. By using gradient-based updates, Green ILC iteratively improves control inputs over time while optimizing energy consumption. This makes Green ILC particularly useful in applications where energy efficiency is critical, such as robotics, manufacturing, and industrial process control. By combining iterative learning and optimization, Green ILC offers a more adaptable and efficient solution for control systems in dynamic environments, significantly reducing energy consumption while maintaining satisfactory tracking performance.
The rest of this paper is organized as follows: Section 2 explains the theoretical background and mathematical development of Green ILC. Section 3 presents the simulation results, demonstrating its performance and comparing it with traditional ILC. Section 4 discusses Green ILC’s performance in detail, and Section 5 concludes the paper by summarizing the key contributions of this research.

2. Method

This section derives the Green ILC updating rule and provides its convergence proof.

2.1. Green Iterative Learning Control

Consider the discrete-time state-space model described by the following equations:
$$x_j(k+1) = A x_j(k) + B u_j(k) + w(k), \qquad y_j(k) = C x_j(k),$$
where $j \in \{0, 1, 2, \dots\}$ is the iteration index and $k \in \{0, 1, \dots, N-1\}$ represents the time step within each iteration. Here, $x_j(k) \in \mathbb{R}^n$ is the state vector, $u_j(k) \in \mathbb{R}^m$ is the control input vector, and $y_j(k) \in \mathbb{R}^p$ is the output vector. The disturbance vector $w(k) \in \mathbb{R}^n$ is assumed to be iteration-invariant. The system matrices are given by $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{p \times n}$.
The control objective is to find a sequence of control inputs $\{u_j(k)\}$ that minimizes the following cost function over iterations:
$$J(u_j) = \sum_{k=0}^{N-1} \left[ e_j(k+1)^\top Q\, e_j(k+1) + u_j(k)^\top R\, u_j(k) \right],$$
where $e_j(k+1) = y_r(k+1) - y_j(k+1)$ denotes the tracking error at time step $k+1$, $y_r(k+1)$ is the desired reference output, and $Q \in \mathbb{R}^{p \times p}$ and $R \in \mathbb{R}^{m \times m}$ are positive definite weighting matrices ($Q > 0$ and $R > 0$).
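For concreteness, the per-trial cost can be evaluated directly from stacked error and input arrays; a minimal sketch (the (N, p)/(N, m) array layout is our own convention, not from the paper):

```python
import numpy as np

def trial_cost(e, u, Q, R):
    """Evaluate J(u_j) = sum_k [ e_j(k+1)^T Q e_j(k+1) + u_j(k)^T R u_j(k) ].

    e: (N, p) array of tracking errors e_j(k+1) over one trial.
    u: (N, m) array of control inputs u_j(k) over the same trial.
    """
    tracking = np.einsum('kp,pq,kq->', e, Q, e)   # sum of e^T Q e terms
    effort = np.einsum('km,mn,kn->', u, R, u)     # sum of u^T R u terms
    return float(tracking + effort)
```

With $Q$ and $R$ positive definite, both sums are non-negative, so the cost simultaneously penalizes tracking error and control effort.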
Substituting the dynamics of the output $y_j(k+1)$ from the system equations,
$$y_j(k+1) = C\left[A x_j(k) + B u_j(k) + w(k)\right] = C A x_j(k) + C B u_j(k) + C w(k),$$
leads to the expression for the error term:
$$e_j(k+1) = y_r(k+1) - C A x_j(k) - C B u_j(k) - C w(k).$$
To minimize the cost function $J(u_j)$ using gradient-based methods, we compute the gradient of $J$ with respect to $u_j(k)$. We define
$$J_1 = e_j(k+1)^\top Q\, e_j(k+1), \qquad J_2 = u_j(k)^\top R\, u_j(k).$$
The gradient of $J_1$ with respect to $u_j(k)$ is given by the chain rule (see Theorem A1 in Appendix A):
$$\nabla_{u_j(k)} J_1 = \left(\frac{\partial e_j(k+1)}{\partial u_j(k)}\right)^{\!\top} 2 Q\, e_j(k+1) = (-C B)^\top\, 2 Q\, e_j(k+1) = -2 B^\top C^\top Q\, e_j(k+1).$$
The gradient of $J_2$ is
$$\nabla_{u_j(k)} J_2 = 2 R\, u_j(k).$$
Thus, the gradient of the cost function $J$ is
$$\nabla_{u_j(k)} J = -2 B^\top C^\top Q\, e_j(k+1) + 2 R\, u_j(k).$$
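The analytic gradient can be checked against central finite differences; since $J$ is quadratic in $u_j(k)$, the two agree to machine precision. A small sketch with randomly drawn matrices (all dimensions and values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
Q = np.diag([3.0, 1.5])                      # Q > 0
R = np.diag([0.5, 0.2])                      # R > 0
x, w = rng.standard_normal(n), rng.standard_normal(n)
y_r, u = rng.standard_normal(p), rng.standard_normal(m)

def J(u):
    """Per-step cost J1 + J2 with x, w, y_r held fixed."""
    e = y_r - C @ (A @ x + B @ u + w)        # e_j(k+1)
    return float(e @ Q @ e + u @ R @ u)

# Analytic gradient: -2 B^T C^T Q e + 2 R u
e = y_r - C @ (A @ x + B @ u + w)
grad_analytic = -2 * B.T @ C.T @ Q @ e + 2 * R @ u

# Central finite differences along each input coordinate
eps = 1e-6
grad_numeric = np.array([
    (J(u + eps * np.eye(m)[i]) - J(u - eps * np.eye(m)[i])) / (2 * eps)
    for i in range(m)
])
```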
By using the gradient descent method (see Theorem A2 in Appendix A) to update the control input, the update rule becomes
$$u_{j+1}(k) = u_j(k) - \alpha \nabla_{u_j(k)} J = u_j(k) - \alpha\left[-2 B^\top C^\top Q\, e_j(k+1) + 2 R\, u_j(k)\right] = \left(I - 2\alpha R\right) u_j(k) + 2\alpha B^\top C^\top Q\, e_j(k+1),$$
where $\alpha$ is the step size and $I$ is the identity matrix of appropriate dimensions. This update rule shows that each iteration refines the control input to reduce the tracking error, reflecting the iterative learning nature of ILC.
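The update rule can be implemented as a single vectorized pass over a trial's recorded data; a minimal sketch (the stacked (N, m)/(N, p) array layout is our own convention):

```python
import numpy as np

def green_ilc_update(u, e_next, B, C, Q, R, alpha):
    """Apply u_{j+1}(k) = (I - 2*alpha*R) u_j(k) + 2*alpha*B^T C^T Q e_j(k+1)
    at every time step k of a trial.

    u:      (N, m) control inputs of trial j
    e_next: (N, p) tracking errors e_j(k+1) of trial j
    """
    m = u.shape[1]
    G_u = np.eye(m) - 2.0 * alpha * R        # discounts the past input (energy term)
    G_e = 2.0 * alpha * B.T @ C.T @ Q        # learning term driven by the past error
    return u @ G_u.T + e_next @ G_e.T
```

For scalar $R$, the factor $(1 - 2\alpha R) < 1$ steadily discounts the previous input, which is where the energy saving relative to traditional ILC originates.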

2.2. Convergence Proof

To establish the convergence of the Green ILC algorithm, we analyze whether the control inputs $u_j(k)$ converge to an optimal solution $u^*(k)$. For this proof, the following assumptions are made:
  • The disturbance $w(k)$ is iteration-invariant.
  • The initial state $x_j(0) = x_0$ is consistent across all iterations.
  • The system is controllable and observable.
  • The matrix $A$ is stable or can be stabilized.
To examine how the cost function $J(u_j)$ evolves, we use a second-order Taylor series expansion (see Theorem A3 in Appendix A) around $u_j(k)$:
$$J(u_{j+1}) \approx J(u_j) + \nabla_{u_j} J^\top (u_{j+1} - u_j) + \tfrac{1}{2}(u_{j+1} - u_j)^\top \nabla_{u_j}^2 J\,(u_{j+1} - u_j).$$
Substituting the gradient descent update rule $u_{j+1} - u_j = -\alpha \nabla_{u_j} J$ into this expansion yields
$$J(u_{j+1}) \approx J(u_j) - \alpha \nabla_{u_j} J^\top \nabla_{u_j} J + \frac{\alpha^2}{2} \nabla_{u_j} J^\top \nabla_{u_j}^2 J\, \nabla_{u_j} J.$$
The change in the cost function, $\Delta J$, is
$$\Delta J = J(u_{j+1}) - J(u_j) \approx -\alpha \|\nabla_{u_j} J\|^2 + \frac{\alpha^2}{2} \nabla_{u_j} J^\top \nabla_{u_j}^2 J\, \nabla_{u_j} J.$$
Let $L$ denote the maximum eigenvalue of the Hessian $\nabla_{u_j}^2 J$; the second-order term can then be bounded by the Rayleigh quotient inequality (see Theorem A4 in Appendix A):
$$\nabla_{u_j} J^\top \nabla_{u_j}^2 J\, \nabla_{u_j} J \le L\, \|\nabla_{u_j} J\|^2.$$
Substituting this bound into the expression for $\Delta J$ gives
$$\Delta J \le -\left(\alpha - \frac{\alpha^2 L}{2}\right) \|\nabla_{u_j} J\|^2.$$
For a guaranteed decrease in the cost function, the following condition must hold:
$$0 < \alpha < \frac{2}{L}.$$
This condition ensures that the update rule results in a monotonic decrease in the cost function $J(u_j)$, leading to the convergence of the control inputs $u_j(k)$ toward the optimal solution $u^*(k)$.
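For this quadratic cost, differentiating the gradient once more with respect to $u_j(k)$ gives a per-step Hessian of $2(B^\top C^\top Q C B + R)$, so the admissible step-size range can be computed in closed form. A sketch (the Hessian expression is our derivation from the stated gradient, not quoted from the paper):

```python
import numpy as np

def max_step_size(B, C, Q, R):
    """Upper limit of the admissible step size from 0 < alpha < 2 / L,
    with L the largest eigenvalue of the per-step Hessian
    H = 2 (B^T C^T Q C B + R)."""
    H = 2.0 * (B.T @ C.T @ Q @ C @ B + R)
    L = float(np.max(np.linalg.eigvalsh(H)))
    return 2.0 / L

# Case 1 values from Section 3.1: C B = 1, Q = 20, R = 1 give L = 42,
# so alpha must satisfy alpha < 2/42; the alpha = 0.025 used there complies.
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
bound = max_step_size(B, C, np.array([[20.0]]), np.array([[1.0]]))
```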

3. Results

To demonstrate the effectiveness of the proposed Green ILC method, we test it on three numerical cases.

3.1. Case 1: Sinusoidal Trajectory

First, we consider a discrete-time system characterized by the following state-space matrices:
$$A = \begin{bmatrix} 0.1 & 0.7 \\ 0 & 0.5 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}.$$
The control objective is to track a reference trajectory while minimizing the cost function defined in Equation (2). The reference trajectory is defined as
$$y_r(k) = \sin\!\left(\frac{2\pi k}{N}\right),$$
for N = 50 time steps. The external disturbance is defined as
$$w(k) = 0.05 \sin\!\left(\frac{k}{10}\right).$$
The Green ILC method is applied over 30 iterations, with a gradient descent step size of α = 0.025 . The control input is updated iteratively according to the rule in Equation (9). The weighting scalars are set to Q = 20 and R = 1 . For comparison, traditional ILC uses learning gains of 0.1, 0.5, and 1.
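A minimal reproduction sketch of this setup follows. Note one assumption: the text gives the disturbance as a scalar formula while $w(k) \in \mathbb{R}^n$, so here it is added to both states for illustration.

```python
import numpy as np

# Case 1 parameters as stated in the text
N, n_iter, alpha, Q, R = 50, 30, 0.025, 20.0, 1.0
A = np.array([[0.1, 0.7], [0.0, 0.5]])
B = np.array([1.0, 0.0])                    # input vector (m = 1)
C = np.array([1.0, 0.0])                    # output vector (p = 1)
k = np.arange(N)
y_r = np.sin(2 * np.pi * (k + 1) / N)       # reference sampled at step k + 1
w = 0.05 * np.sin(k / 10.0)                 # iteration-invariant disturbance

u = np.zeros(N)                             # u_j(k); the first trial uses zero input
costs = []
for j in range(n_iter):
    x = np.zeros(2)                         # consistent initial state x_0
    e = np.zeros(N)
    for t in range(N):
        x = A @ x + B * u[t] + w[t]         # assumption: w[t] added to both states
        e[t] = y_r[t] - C @ x               # tracking error e_j(k+1)
    costs.append(Q * (e @ e) + R * (u @ u)) # J = J1 + J2 for trial j
    # Green ILC update; B^T C^T = 1 for these B and C
    u = (1 - 2 * alpha * R) * u + 2 * alpha * Q * e
```

Under the step-size condition of Section 2.2, the recorded per-trial costs decrease across iterations toward an equilibrium value.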
Figure 1a compares the desired trajectory y r and the actual trajectories y at the 2nd, 5th, and 30th iterations. The results show near-perfect tracking with minimal errors at the 30th iteration, demonstrating Green ILC’s ability to optimize both tracking accuracy and control energy. Although Green ILC may result in slightly larger tracking errors than traditional ILC, the difference is minor when high precision is not critical. This trade-off highlights Green ILC’s capacity to balance accuracy with energy efficiency, making it ideal for applications where energy savings are prioritized over extreme precision.
Figure 1b shows the tracking error $|e|$ at the 2nd, 5th, and 30th iterations, with a clear reduction in error across iterations. The tracking error $|e|$ remains almost unchanged after the fifth iteration, indicating that the system has reached equilibrium.
Figure 1c compares the control input u at the 2nd, 5th, and 30th iterations. The inputs are very similar even though the corresponding outputs differ noticeably, indicating that the system output is highly sensitive to input variations: tiny changes in the input can lead to large differences in the output.
Figure 1d displays the total cost $J$, output error cost $J_1$, and control input cost $J_2$ over iterations. After approximately five iterations, all three costs reach equilibrium, indicating that the system has effectively balanced tracking accuracy with control input energy, thereby validating the convergence of the Green ILC method.
Figure 1e shows the total cost J for both Green ILC and traditional ILC over iterations. Green ILC stabilizes after 4 iterations, whereas traditional ILC requires 25, 5, or 4 iterations for different learning gains. At equilibrium, the total cost J for Green ILC (15.88 units) is lower than for traditional ILC (16.01, 16.52, or 16.52 units).
Figure 1f illustrates the control input energy consumption over iterations for both traditional ILC and Green ILC. Since Green ILC explicitly considers input energy in its optimization, it results in lower energy consumption in the steady state. After reaching equilibrium, Green ILC requires only 15.13 units of energy, compared with 15.57, 16.52, and 16.52 units for traditional ILC. This reduction in control effort underscores Green ILC’s ability to maintain desired performance while minimizing energy consumption.

3.2. Case 2: Random Trajectory

Second, we consider another discrete-time system with the following state-space matrices:
$$A = \begin{bmatrix} 0.9 & 0.3 \\ 0 & 0.7 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}.$$
To increase the difficulty of the tracking task, a random signal is used as the reference trajectory, serving as a robust test for the proposed method. Additionally, random noise, constant at each iteration, is added to evaluate the learning capability of Green ILC.
The Green ILC method is applied over 60 iterations, with a gradient descent step size of α = 0.015 . The control input is updated iteratively according to the rule in Equation (9). The weighting scalars are set to Q = 5 and R = 1 . For comparison, traditional ILC uses learning gains of 0.1, 0.2, and 0.25.
Figure 2a–d illustrate a similar phenomenon to that observed in Case 1. Figure 2e demonstrates that Green ILC stabilizes after 30 iterations, whereas traditional ILC stabilizes in 20 iterations for each of the three learning gains. However, at equilibrium, the total cost J for Green ILC (6.53 units) is lower than that for traditional ILC (8.35, 8.46, or 8.46 units), underscoring the superior optimization capabilities of Green ILC. Figure 2f shows that after reaching equilibrium, Green ILC requires only 5.40 units of energy, in contrast to the 7.70, 7.83, or 7.84 units consumed by traditional ILC. The more pronounced energy reduction in Case 2 compared with Case 1 is attributed to the decrease in the error weight Q.

3.3. Case 3: Complex Trajectory

Thirdly, we consider a third-order discrete-time system characterized by the following state-space matrices:
$$A = \begin{bmatrix} 0.1 & 0.7 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 0.5 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 & 1 \end{bmatrix}.$$
The reference trajectory is defined as
$$y_r(k) = \sin\!\left(\frac{2\pi k}{N}\right) + \sin\!\left(\frac{10\pi k}{N}\right),$$
for N = 50 time steps. The external disturbance is defined as in Equation (18). The Green ILC method is applied over 60 iterations, with a gradient descent step size of α = 0.02 . The control input is updated iteratively according to the rule in Equation (9). The weighting scalars are set to Q = 5 and R = 1 . For comparison, traditional ILC uses learning gains of 0.1, 0.5, and 0.6.
Figure 3a–d show similar performance to the previous cases. Figure 3e shows that Green ILC stabilizes after 30 iterations, whereas traditional ILC stabilizes at a rate comparable to the previous cases. At equilibrium, the total cost J for Green ILC (5.49 units) is lower than that for traditional ILC (5.62, 5.62, or 5.62 units). Figure 3f illustrates that after reaching equilibrium, Green ILC requires only 5.23 units of energy, compared with 5.62, 5.62, or 5.62 units used by traditional ILC.

4. Discussion

Green ILC combines the strengths of gradient descent optimization and the ILC strategy, leading to significant reductions in energy consumption while maintaining good system performance. However, compared with traditional ILC, Green ILC may show a slight decrease in tracking accuracy. This method optimizes control inputs and tracking errors, prioritizing energy efficiency over precision. As a result, it may not be the best choice for applications where high precision is crucial, but it is highly effective in cases where managing energy use is the main concern.
The main advantage of Green ILC is its ability to significantly reduce control energy consumption, especially in energy-intensive systems. By balancing tracking accuracy with control inputs, Green ILC helps avoid excessive energy usage, which can lead to actuator wear or failure. In many repetitive-task applications, such as robotics, automated manufacturing, and process control, Green ILC provides acceptable tracking accuracy while saving energy by reducing control effort. Additionally, Green ILC can handle certain unknown disturbances that traditional LQR methods cannot manage, increasing its robustness and expanding its range of practical applications.
In Green ILC, the weighting matrices Q and R are pivotal in balancing tracking errors and control energy. By dynamically adjusting these matrices based on system states and performance requirements, Green ILC achieves adaptive optimization of control strategies, thereby enhancing system robustness and reliability. For instance, in robotics, Green ILC can modify the weighting matrices to meet specific task demands, such as precise assembly or high-speed material handling. In manufacturing, it optimizes energy consumption and efficiency across production lines, accommodating various product specifications.
In Unmanned Aerial Vehicles (UAVs), Green ILC's dynamic adjustment of Q and R significantly improves system reliability and broadens its application scope. During missions in complex terrains or under variable weather conditions, increasing the weight of Q emphasizes trajectory tracking accuracy, ensuring flight safety. Conversely, for long-distance missions, elevating the weight of R reduces energy consumption, thereby extending flight duration. This flexibility allows Green ILC to adapt to diverse scenarios, including logistics, environmental monitoring, and disaster relief, thereby enhancing UAV system reliability and versatility.
In aerospace applications, Green ILC demonstrates substantial potential. During satellite deployment, increasing the weight of Q ensures precise positioning. For extended space missions, adjusting R to lower energy consumption can prolong operational lifespan. This adaptability makes Green ILC suitable for various aerospace tasks, such as satellite orbit control, spacecraft docking, and planetary exploration, thereby improving system reliability and expanding its range of applications.
Furthermore, Green ILC addresses sensitivity to initial conditions through its adaptive learning mechanism. By integrating the iterative refinement capabilities of traditional ILC with the optimization strengths of gradient descent, Green ILC dynamically adjusts control inputs based on feedback from previous iterations. This adaptive process allows the system to progressively correct deviations caused by suboptimal initial conditions, leading to improved convergence and performance over time. As a result, Green ILC reduces the impact of initial state variations, ensuring robust and consistent control outcomes across different starting points.
However, in high-dimensional nonlinear systems, ILC methods, including Green ILC, can encounter challenges related to computational efficiency and convergence speed. For instance, the study [20] examines the difficulties in achieving robust convergence in the presence of non-repetitive uncertainties, underscoring the limitations of ILC approaches in complex nonlinear environments.
To assess actuator reliability and safety, it is essential to evaluate actuator wear by assessing input energy variance and thermal stability. High variance in input energy indicates fluctuating loads, leading to increased wear. Monitoring this variance provides insights into the actuator’s degree of wear. Green ILC optimizes control strategies to reduce energy consumption, thereby decreasing load fluctuations and operating temperatures. This reduction not only minimizes wear but also enhances thermal stability, indirectly extending the actuator’s service life and improving reliability and safety.
Both papers [18,19] provide experimental results confirming the advantages of gradient-based ILC methods, supporting the performance improvements introduced by Green ILC. The paper [18] validates a model-free gradient ILC in stroke rehabilitation tasks, achieving monotonic convergence with tracking errors up to 1000 times smaller than traditional linear model-free ILC methods. The paper [19] demonstrates a reduction in the standard deviation of normalized steady-state errors from 0.224 in traditional proportional ILC to 0.009 in gradient-based ILC, showcasing enhanced robustness and adaptability for nonlinear systems. These findings, together with the energy optimization and tracking improvements of Green ILC, underscore the ability of gradient-based methods to outperform traditional ILC by enhancing convergence, reducing errors, and addressing energy efficiency in complex control problems.
The Green ILC method is based on the assumption of a known linear model with fixed matrices A , B , and C . While this framework offers mathematical simplicity and ensures convergence, it restricts the method’s applicability to systems with nonlinear or uncertain dynamics. In practical applications, linear models are often used as approximations of system behavior within specific operating ranges. To overcome these limitations, future research could focus on developing adaptive or model-free extensions, which would allow the method to handle time-varying or nonlinear systems effectively. Furthermore, integrating Green ILC with robust control strategies or predictive techniques could significantly enhance its applicability and adaptability in dynamic and complex environments.

5. Conclusions

In this paper, Green ILC is introduced as an innovative hybrid control strategy that effectively optimizes energy consumption while maintaining satisfactory tracking accuracy. By combining the principles of gradient descent optimization with iterative learning, the algorithm delivers significant improvements in convergence speed and energy efficiency compared with traditional ILC methods. Simulation results highlight reductions in energy usage and enhanced robustness in dynamic environments.
The practical applications of Green ILC are extensive, spanning robotic manipulators, UAVs, and aerospace systems, where energy efficiency and system reliability are of paramount importance. To maximize its potential in power system design, adaptive weighting strategies should be employed to minimize energy fluctuations and reduce actuator wear. Integrating Green ILC with renewable energy systems could improve stability under variable power conditions, making it a valuable solution for sustainable energy management.
Future research should focus on extending the applicability of Green ILC to nonlinear and time-varying systems, incorporating predictive models to enhance adaptability, and validating its performance in real-world scenarios. These advancements would further establish Green ILC as a practical approach to addressing energy-efficient control challenges.

Author Contributions

Conceptualization, Y.D.; methodology, Y.D.; software, Y.D.; validation, Y.D.; formal analysis, Y.D.; investigation, Y.D.; resources, Y.D.; data curation, Y.D.; writing—original draft preparation, Y.D.; writing—review and editing, Y.D. and E.P.; visualization, Y.D.; supervision, E.P.; project administration, Y.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support this research article are available from the author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Theorem A1
(Chain Rule [21]). Let $f: \mathbb{R}^m \to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}$ be differentiable functions. Define the composite function $h: \mathbb{R}^m \to \mathbb{R}$ by $h(x) = g(f(x))$, where $x \in \mathbb{R}^m$ and $y = f(x)$. Then, the gradient of $h$ with respect to $x$ is given by
$$\nabla h(x) = \nabla f(x)^\top \nabla g(y),$$
where $\nabla g(y)$ is the gradient of $g$ with respect to $y$ and $\nabla f(x)$ is the Jacobian matrix of $f$ with respect to $x$.
Theorem A2
(Gradient Descent Method [22]). Let $f: \mathbb{R}^n \to \mathbb{R}$ be a differentiable function. The gradient descent algorithm updates the parameter $x \in \mathbb{R}^n$ iteratively to find a local minimum of $f$. The update rule is given by
$$x_{k+1} = x_k - \alpha \nabla f(x_k),$$
where $\alpha > 0$ is the step size and $\nabla f(x_k)$ is the gradient of $f$ with respect to $x$ at iteration $k$.
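A minimal illustration of Theorem A2 on a scalar quadratic (the function and values are chosen only for demonstration):

```python
import numpy as np

def gradient_descent(grad, x0, alpha, n_steps):
    """Iterate x_{k+1} = x_k - alpha * grad(x_k) for a fixed number of steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - alpha * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); each step
# contracts the distance to the minimizer x = 3 by a factor (1 - 2*alpha).
x_min = gradient_descent(lambda x: 2 * (x - 3), 0.0, 0.1, 200)
```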
Theorem A3
(Taylor Series Expansion [23]). Let $f: \mathbb{R}^n \to \mathbb{R}$ be twice continuously differentiable at $x_0$. The second-order Taylor expansion of $f$ around $x_0$ is given by
$$f(x) \approx f(x_0) + \nabla f(x_0)^\top (x - x_0) + \tfrac{1}{2}(x - x_0)^\top \nabla^2 f(x_0)\,(x - x_0),$$
where $\nabla f(x_0)$ is the gradient of $f$ at $x_0$ and $\nabla^2 f(x_0)$ is the Hessian matrix of $f$ at $x_0$.
Theorem A4
(Rayleigh Quotient Inequality [24]). Let $H \in \mathbb{R}^{n \times n}$ be a symmetric positive definite matrix, and let $z \in \mathbb{R}^n$ be a non-zero vector. The quadratic form $z^\top H z$ satisfies the inequality
$$\lambda_{\min}(H)\,\|z\|^2 \le z^\top H z \le \lambda_{\max}(H)\,\|z\|^2,$$
where $\lambda_{\min}(H)$ and $\lambda_{\max}(H)$ denote the smallest and largest eigenvalues of $H$, respectively.
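Theorem A4 can be verified numerically for any symmetric matrix; a quick sketch with an arbitrary positive definite example:

```python
import numpy as np

H = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # symmetric positive definite
z = np.array([1.0, -2.0])                   # any non-zero vector
quad = float(z @ H @ z)                     # the quadratic form z^T H z
lam_min, lam_max = np.linalg.eigvalsh(H)    # eigenvalues in ascending order
norm_sq = float(z @ z)
# Rayleigh bounds: lam_min * ||z||^2 <= z^T H z <= lam_max * ||z||^2
```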

References

1. Djurdjanovic, D.; Mears, L.; Niaki, F.A.; Haq, A.U.; Li, L. State of the art review on process, system, and operations control in modern manufacturing. J. Manuf. Sci. Eng. 2018, 140, 061010.
2. Ang, K.H.; Chong, G.; Li, Y. PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 2005, 13, 559–576.
3. Song, X.; Yan, X.; Li, X. Survey of duality between linear quadratic regulation and linear estimation problems for discrete-time systems. Adv. Differ. Equ. 2019, 2019, 90.
4. Doyle, J. Guaranteed margins for LQG regulators. IEEE Trans. Autom. Control 1978, 23, 756–757.
5. Lehtomaki, N.; Sandell, N.; Athans, M. Robustness results in linear-quadratic Gaussian based multivariable control designs. IEEE Trans. Autom. Control 1981, 26, 75–93.
6. Tatjewski, P. Advanced Control of Industrial Processes: Structures and Algorithms; Springer: London, UK, 2007.
7. Shakya, A.K.; Pillai, G.; Chakrabarty, S. Reinforcement learning algorithms: A brief survey. Expert Syst. Appl. 2023, 231, 120495.
8. Faedo, N.; Olaya, S.; Ringwood, J.V. Optimal control, MPC and MPC-like algorithms for wave energy systems: An overview. IFAC J. Syst. Control 2017, 1, 37–56.
9. Owens, D.; Hätönen, J. Iterative learning control—An optimization paradigm. Annu. Rev. Control 2005, 29, 57–70.
10. Lee, J.H.; Lee, K.S. Iterative learning control applied to batch processes: An overview. Control Eng. Pract. 2007, 15, 1306–1318.
11. Bristow, D.; Tharayil, M.; Alleyne, A. A survey of iterative learning control. IEEE Control Syst. Mag. 2006, 26, 96–114.
12. Ahn, H.S.; Chen, Y.; Moore, K.L. Iterative Learning Control: Brief Survey and Categorization. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2007, 37, 1099–1121.
13. Thanh Cao, T.; Doan Nguyen, P.; Hoai Nguyen, N.; Thu Nguyen, H. An indirect iterative learning controller for nonlinear systems with mismatched uncertainties and matched disturbances. Int. J. Syst. Sci. 2022, 53, 3375–3389.
14. Amari, S.I. Backpropagation and stochastic gradient descent method. Neurocomputing 1993, 5, 185–196.
15. Sogo, T.; Adachi, N. Iterative learning control based on the gradient method for linear discrete-time systems. IFAC Proc. Vol. 1999, 32, 4729–4734.
16. Owens, D.; Hätönen, J.; Daley, S. Robust Gradient-based Iterative Learning Control. In Proceedings of the 2007 IEEE International Conference on Networking, Sensing and Control, London, UK, 15–17 April 2007; pp. 163–168.
17. Chu, B.; Owens, D.H.; Freeman, C.T. Predictive gradient iterative learning control. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 2377–2382.
18. Huo, B.; Freeman, C.; Liu, Y. Model-free Gradient Iterative Learning Control for Non-linear Systems. IFAC-PapersOnLine 2019, 52, 304–309.
19. He, Z.; Pu, H. A gradient-descent iterative learning control algorithm for a non-linear system. Trans. Inst. Meas. Control 2024, 01423312241247873.
20. Xu, Q.Y.; Wei, Y.S.; Cheng, J.; Wan, K. Adaptive ILC Design for Nonlinear Discrete-time Systems with Randomly Varying Trial Lengths and Uncertain Control Directions. Int. J. Control Autom. Syst. 2023, 21, 2810–2820.
21. Cheney, W. Analysis for Applied Mathematics; Springer: New York, NY, USA, 2001.
22. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
23. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; US Government Printing Office: Washington, DC, USA, 1968; Volume 55.
24. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012.
Figure 1. (a) Desired trajectory y r and actual trajectories y at the 2nd, 5th, and 30th iterations. (b) Tracking error | e | at the 2nd, 5th, and 30th iterations. (c) Control input u at the 2nd, 5th, and 30th iterations. (d) Total cost J, error cost J 1 , and input cost J 2 for Green ILC over iterations. (e) Comparison of total cost J for traditional ILC and Green ILC over iterations. (f) Comparison of input cost J 2 for traditional ILC and Green ILC over iterations.
Figure 2. (a) Desired trajectory y r and actual trajectories y at the 5th, 10th, and 60th iterations. (b) Tracking error | e | at the 5th, 10th, and 60th iterations. (c) Control input u at the 5th, 10th, and 60th iterations. (d) Total cost J, error cost J 1 , and input cost J 2 for Green ILC over iterations. (e) Comparison of total cost J for traditional ILC and Green ILC over iterations. (f) Comparison of input cost J 2 for traditional ILC and Green ILC over iterations.
Figure 3. (a) Desired trajectory y r and actual trajectories y at the 2nd, 5th, and 60th iterations. (b) Tracking error | e | at the 2nd, 5th, and 60th iterations. (c) Control input u at the 2nd, 5th, and 60th iterations. (d) Total cost J, error cost J 1 , and input cost J 2 for Green ILC over iterations. (e) Comparison of total cost J for traditional ILC and Green ILC over iterations. (f) Comparison of input cost J 2 for traditional ILC and Green ILC over iterations.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Dou, Y.; Prempain, E. Green ILC: A Novel Energy-Efficient Iterative Learning Control Approach. Sensors 2024, 24, 7787. https://doi.org/10.3390/s24237787