Design and Implementation of an Accelerated Error Convergence Criterion for Norm Optimal Iterative Learning Controller

Abstract: Designing an optimal iterative learning controller is a major challenge for linear and nonlinear dynamic systems. For such complex systems, standard norm optimal iterative learning control (NOILC) is an important candidate. This paper presents a novel NOILC error convergence technique for discrete-time systems. The primary goal of the controller is to converge the error efficiently and quickly in an optimal way. A new feedback-based iterative learning algorithm that is robust against input disturbances is proposed. A numerical example simulated in MATLAB R2019 illustrates the suggested procedure, and the results confirm the validity of the designed algorithm.


Introduction
Many industrial process operating systems follow the same operation mode, in which a specific task is completed again and again within a particular time interval. For example, industrial production commonly involves continuous batch processes: the system completes a batch under a given procedure within the required time interval and then repeats it over and over. For a system whose continuous operation is divided into batches, if each batch has the same operation length, then the operating statistics and experience of earlier batches may be used to adjust the operating strategy for the next batch. This core idea leads us to "learning", which in turn leads to iterative learning control (ILC), an essential branch of intelligent control [1,2]. The learning algorithm is inspired by the human brain. For example, a coach assigns an ideal repetitive physical exercise to a team, and the whole team then tries to converge to that commanded exercise motion. Once the task has been performed through sufficient trials or repetitions, an automatic program is generated in the central nervous system (CNS); this program produces a sequence of input signals that stimulates the muscles and tendons according to the ideal motion pattern and approaches the required form of motion.
ILC is a universal control strategy framed on the study of the human learning process, in which the main idea is to learn, from the data of successive full batches, the consistent factors of the system's operation so that tracking performance gradually improves [3]. This control strategy is less dependent on system information and is thus a largely data-based control procedure, which effectively deals with traditional control challenges such as high precision, steady connection, modeling difficulties, and efficient tracking performance. The basic approach is that, for a robot performing trajectory tracking tasks in a finite time interval, the error-correcting control information of the current or previous trial is used to make the repetitive task perform better in the next operation. These repetitions continue until the output trajectory over the entire time interval tracks the desired path [4]. An error that persists from trial to trial is a very challenging problem to overcome. To cope with it, Arimoto, the inventor of iterative learning control, proposed that both the current and the previous trial data be used to generate proper and precise control action [5,6].
ILC is suitable for controlled objects with repetitive motion tasks and iteratively corrects the input toward a precise control target. Iterative learning control does not depend on an accurate mathematical model of the system. Within a finite time interval, even a nonlinear, high-coupling dynamic system with high uncertainty can track a given expectation precisely [7]. This kind of trajectory tracking is used effectively in the field of motion control. ILC is adapted to systems whose operation can be repeated continuously over a finite time and is often used to track a specific target that tends to remain the same; in this respect it is very similar to repetitive control. ILC is an intelligent control method that can continuously learn and evolve as the number of iterations increases, adapting toward the optimal control. ILC does not need to care about the precise model of the system, yet it can achieve very high-precision control performance with straightforward control.

ILC History and Background
Uchiyama first proposed the idea of iterative learning control (ILC) in 1978, but the paper was written in Japanese, so its overall impact was limited. In 1984, Arimoto [1] introduced the method in English. ILC refers to a control method that continuously repeats a control attempt along the same trajectory and corrects the control law to obtain a very good control effect. It is worth noting that around 1984, five teams reported the fact that the repetition performed by a system can be utilized to improve performance; these teams are listed in [11].
The characteristic of iterative learning control is "learning in repetition": through repeated iterative correction, the control effect is progressively improved. The basic principle of iterative learning is that the error from the previous trial's output is used to correct the input for the next trial, so the control input is continuously corrected during the control process. As the iterations progress, the control input is gradually improved to make the control effective. Iterative learning control requires only a small amount of prior information and is an effective control strategy for dealing with systems with unknown models and repeated motion properties.
As described above, the error-correction information of the previous operations is used to make the repetitive task perform better in the next operation, and this is repeated until the output trajectory tracks the desired trajectory over the entire time interval. ILC has produced several valuable results in both theory and applications over these three decades of evolution. It was concluded from the survey papers [12][13][14][15][16][17] that the key factors of the system dynamics, such as the same tracking reference, the same operation length, and the same initial state, can reduce the complexity of the proposed update law as well as improve the tracking performance.
To date, most ILC work on ideal trajectory tracking requires that the trial length is fixed and constant in each iteration [18][19][20]. However, in many ILC engineering applications, this requirement of a uniform trial length may not be satisfied: owing to complexity, random factors or random events, the span of the system output, states and control input may change from iteration to iteration. Recent studies discuss variable trial lengths and try to make the controller robust against them [21][22][23][24][25][26][27]. For dynamic systems with iteratively varying path lengths, ILC algorithms based on iteration-moving-average operators can also be found [22,25,26]. Recently, in [28], ILC for varying trajectory lengths in discrete-time systems was addressed to ensure that a modified P-type ILC scheme converges deterministically. Current ILC studies based on varying trajectory lengths usually assume that the lengths of the states, control inputs and outputs are the same within a particular iteration. In many practical applications, it is expected that the controlled system can achieve the control goal without extra effort. For example, in car or robot speed control, when the speed is close to the desired value, the system can operate freely without any extra control effort. This indicates that the control input can be removed within a specific time limit during the repetitive process, which corresponds to a repeating system with a randomly variable input trial length.
Accelerated error convergence is always a challenging task in dynamical and maneuverable systems where external disturbances are dominant, such as precise trajectory tracking of a highly maneuverable omni-wheel mobile robot [29]. The authors experimentally demonstrated PID (proportional integral derivative) type control to compensate for the disturbances. The Newton-Euler method and its stability analysis were proposed for the nonlinear hexacopter system to eliminate instabilities [30]. In addition, unconventional control methods have been adopted to improve the tracking accuracy and performance of soft robot manipulators. Learning-based open-loop control, relying entirely on mechanical feedback, was demonstrated on a pneumatic soft manipulator in [31]. The authors of [32] achieved a substantial decrease in the tracking error of fabric-based soft arms by using model predictive control. The control method of [33] is an extended neural-network-based nonlinear model predictive controller used to capture the dynamics. In [34], a reinforcement learning method was applied to jointly optimize the stiffness and precise position tracking of a manipulator for assistive applications.
This paper proposes a novel error convergence topology based on the aforementioned ILC methodology, and a typical norm optimal iterative learning error convergence control algorithm is presented. The novel optimality criterion is precise and robust to input disturbances. The core advantage of the proposed algorithm is its fast error convergence rate, which is a vital concern in every controller design, and the algorithm is effective and robust against input disturbances. Simulation results validate the optimal control algorithm.

Norm Optimal ILC
The main objective of an iterative learning control algorithm is to determine an ideal input for the system under trial [35][36][37][38]. When this input is applied, the plant output should track the desired trajectory as precisely as possible. The phrase "as precisely as possible" means that the error, i.e., the difference between the measured output and the desired output, should be as small as possible. The primary role of the ILC algorithm is to minimize this error so that it ultimately approaches zero. The ILC control law therefore works in two dimensions, the time index t and the iteration index k. To this end, a cost function is proposed for the system under trial:

J(u_{k+1}) = ‖e_{k+1}‖_Q² + ‖u_{k+1} − u_k‖_R²,   (1)

where J(u_{k+1}) and ‖u_{k+1} − u_k‖_R are the cost function for the current trial input and the norm of the difference between the present and preceding input trials, respectively, and ‖e_{k+1}‖_Q is the error norm of the present trial. The input and error spaces are the spaces of the u and e vectors, respectively, which are isometrically equivalent to the time intervals [0, N − 1] and [1, N].
For simplicity, the sums defining the norms of the performance index are understood as

‖e‖_Q² = Σ_{t=1}^{N} eᵀ(t) Q(t) e(t),   ‖u‖_R² = Σ_{t=0}^{N−1} uᵀ(t) R(t) u(t),   (3)

where Q and R are the weight matrices of Equations (1) and (3), which must be symmetric positive definite over the whole time horizon; here Q and R are assumed diagonal with positive real entries q(t) and r(t). In this formulation, Q(t) and R(t) are, respectively, the error weight and the control-variation weight matrices at time t, with Q(t) > 0 and R(t) > 0 at every t, so that the block-diagonal matrices built from them are real symmetric positive definite. The purpose of this performance index is to find an optimal u_{k+1} that minimizes the tracking error while ensuring that the deviation from u_k is not too large; the balance between these two goals is decided by the design of Q and R. The optimum is obtained by taking the partial derivative of the performance index with respect to u_{k+1} and setting it to zero:

∂J/∂u_{k+1} = −2 Gᵀ Q e_{k+1} + 2 R (u_{k+1} − u_k) = 0,   (6)

where G denotes the lifted system matrix mapping the input sequence to the output sequence. Since R is a positive definite real symmetric matrix, it is invertible, and solving Equation (6) provides the optimal control-law update:

u_{k+1} = u_k + R^{−1} Gᵀ Q e_{k+1}.   (7)

The learning operator L* = (I + G R^{−1} Gᵀ Q)^{−1} is characterized by the following theorem. Theorem 2: if ‖L*‖ < 1, then e_k → 0 as the number of iterations goes to infinity; that is, the designed controller realizes full tracking of the expected trajectory.
It is proven that, according to the tracking error e_{k+1} = y_d − y_{k+1} during the (k + 1)th iteration, the update equation of the tracking error along the iteration axis can be calculated as

e_{k+1} = e_k − G(u_{k+1} − u_k) = e_k − G R^{−1} Gᵀ Q e_{k+1}   ⇒   e_{k+1} = (I + G R^{−1} Gᵀ Q)^{−1} e_k.

Since ‖(I + G R^{−1} Gᵀ Q)^{−1}‖ < 1, it is obvious that ‖e_{k+1}‖ < ‖e_k‖ and the error converges to zero. At the same time, the control law designed according to this optimal index can change the contraction factor, and thereby adjust the convergence rate of the tracking error, by adjusting the gain matrices Q and R. The literature contains many ways of computing the cost function and solving the ILC problem. Some arguments prove the appropriateness of the cost function (Equation (2)) and its effectiveness: (i) the term ‖u_{k+1} − u_k‖_R ensures that the deviation of the input from trial to trial is minimal, which means the convergence is automatically smooth and produces a stable control input signal to manipulate the actuator behavior; (ii) the term ‖e_{k+1}‖_Q drives the reduction in the tracking error from trial to trial.
(iii) The cost function depends on both aforementioned terms through the Q and R weight matrices. The ratio λ = r/q has to be significant to keep the input deviation very small, and it is chosen as small as possible to make the error the lowest; the terms "small" and "large" depend on the constraints of the system to be used. (iv) Choosing u_{k+1} − u_k = 0 makes the optimal value of the cost function bounded, since then J(u_{k+1}) = ‖u_{k+1} − u_k‖_R² + ‖e_k‖_Q² = ‖e_k‖_Q², which further results in the optimal value satisfying J(u_{k+1}) − ‖e_k‖_Q² ≤ 0.
Hence, the optimal value is bounded from below and above:

0 ≤ ‖e_{k+1}‖_Q² ≤ J(u_{k+1}) ≤ ‖e_k‖_Q².

Combining these two bounds shows that the error norm is monotonically non-increasing along the trials. The input update law is obtained easily by differentiating the cost function with respect to u_{k+1}:

u_{k+1} = u_k + R^{−1} Gᵀ Q e_{k+1},

where u_{k+1} and u_k represent the current and previous trial inputs, respectively, e_{k+1} is the error of the ongoing trial, and the learning gain R^{−1} Gᵀ Q is determined by the weights of the cost function.
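As a concrete illustration of the update law and the resulting monotone error decay, the lifted form of the algorithm can be sketched as follows. The second-order plant, its matrices, and the scalar weights q and r are placeholder assumptions, not taken from the paper:

```python
import numpy as np

# Lifted NOILC sketch. Minimizing J gives u_{k+1} = u_k + (q/r) G^T e_{k+1},
# so the error contracts as e_{k+1} = (I + (q/r) G G^T)^{-1} e_k.
# Plant matrices and weights below are illustrative assumptions.
N = 50                                    # trial length
A = np.array([[1.0, 0.1], [0.0, 0.9]])    # assumed state matrix
B = np.array([[0.0], [0.1]])
C = np.array([[0.0, 1.0]])                # measured output (CB != 0)

# Build the lifted matrix G with entries G[i, j] = C A^(i-j) B for i >= j
G = np.zeros((N, N))
for i in range(N):
    Ap = np.eye(2)
    for j in range(i, -1, -1):
        G[i, j] = (C @ Ap @ B)[0, 0]
        Ap = Ap @ A

q, r = 1.0, 0.01                          # scalar weights Q = qI, R = rI
L = np.linalg.inv(np.eye(N) + (q / r) * G @ G.T)   # error transition operator

y_d = np.sin(2 * np.pi * np.arange(1, N + 1) / N)  # example reference
e = y_d.copy()                                     # e_0 for u_0 = 0
norms = []
for k in range(10):
    e = L @ e                                      # e_{k+1} = L e_k
    norms.append(np.linalg.norm(e))
```

Because all eigenvalues of the error transition operator lie in (0, 1) when G is nonsingular, the recorded error norms decrease strictly from trial to trial.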

ILC Algorithm Design and Practical Application
The design process for the aforementioned norm optimal iterative learning controller comprises the following steps.

Step 1: a causal ILC implementation is possible when full state knowledge of the system is available. The optimal control is rewritten in terms of the costate, and a standard strategy, as taken in [28,39], is adopted to calculate the feedback gain K(t) from the well-known discrete Riccati equation [40].

Step 2: with the error weighted at time t + 1, the gain recursion can be written as

K(t) = Aᵀ M(t+1) A − Aᵀ M(t+1) B (Bᵀ M(t+1) B + R)^{−1} Bᵀ M(t+1) A,   M(t+1) = Cᵀ Q C + K(t+1).
For this recursion, the terminal condition is K(N) = 0. The recursion is independent of the system states, inputs, and outputs; hence, the feedback gain can be solved offline before the trial sequence is initiated.
Step 3: the feedforward action, called the predictive term ξ_{k+1}(t), is produced by the backward recursion

ξ_{k+1}(t) = Aᵀ { w(t+1) − M(t+1) B (Bᵀ M(t+1) B + R)^{−1} Bᵀ w(t+1) },   w(t+1) = Cᵀ Q e_k(t+1) + ξ_{k+1}(t+1),

with the terminal condition ξ_{k+1}(N) = 0. The predictive term is consequently determined by the error of the previous (i.e., kth) trial.
Step 4: the input update law for the aforementioned steps is described as

u_{k+1}(t) = u_k(t) − (Bᵀ M(t+1) B + R)^{−1} Bᵀ M(t+1) A { x_{k+1}(t) − x_k(t) } + (Bᵀ M(t+1) B + R)^{−1} Bᵀ { Cᵀ Q e_k(t+1) + ξ_{k+1}(t+1) },   (15)

where M(t+1) = Cᵀ Q C + K(t+1).
A typical iterative learning control algorithm thus combines full-state feedback on the (k + 1)th trial with a feedforward term built from the last trial's error data. Although this representation of the ILC algorithm is non-causal in the previous-trial error, Equations (8) and (9) can be solved offline; each trial is then a switched-time simulation that utilizes the available previous-trial information. The block diagram of the proposed controller design algorithm is shown in Figure 1. As can be observed, the original prediction-error values are fed into the workflow at the same time as the gain matrix K(t) computed in the first stage, and both are passed to the workspace. The second step uses these two input sources and initializes all the information saved in the workspace so far; during this step, the values of the predictive-term array are provided to the third stage in sequence. Finally, all workspace values are referenced in the third stage and used offline to control the plant.
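The step structure above (offline Riccati pass, offline predictive-term pass, then the trial update) can be sketched end to end as follows. The plant, horizon and weights are illustrative assumptions rather than the paper's model, and the error is weighted at time t + 1:

```python
import numpy as np

# Sketch of the step-by-step causal implementation (assumed second-order
# SISO plant; the error is weighted at t+1, one common NOILC convention).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[0.0, 1.0]])
N, Q, R = 60, 1.0, 0.01
y_d = np.sin(2 * np.pi * np.arange(1, N + 1) / N)   # reference at t = 1..N

def backward_passes(e_prev):
    """Steps 1-3: Riccati gain F(t) and predictive term g(t), solved offline."""
    K = np.zeros((2, 2))             # terminal condition K(N) = 0
    xi = np.zeros((2, 1))            # terminal condition xi(N) = 0
    F, g = [None] * N, [None] * N
    for t in range(N - 1, -1, -1):
        M = C.T * Q @ C + K          # cost-to-go incl. error weight at t+1
        Lam = np.linalg.inv(B.T @ M @ B + R)
        w = C.T * Q * e_prev[t] + xi # driven by previous-trial error e_k(t+1)
        F[t] = Lam @ B.T @ M @ A     # state-feedback gain (Step 2)
        g[t] = Lam @ B.T @ w         # predictive term (Step 3)
        K = A.T @ M @ A - A.T @ M @ B @ Lam @ B.T @ M @ A
        xi = A.T @ (w - M @ B @ g[t])
    return F, g

def run_trial(u):
    x, y = np.zeros((2, 1)), np.zeros(N)
    for t in range(N):
        x = A @ x + B * u[t]
        y[t] = (C @ x)[0, 0]
    return y

u = np.zeros(N)
e = y_d - run_trial(u)
norms = [np.linalg.norm(e)]
for k in range(8):                   # Step 4: update the input trial by trial
    F, g = backward_passes(e)
    x_new, x_old, u_new = np.zeros((2, 1)), np.zeros((2, 1)), np.zeros(N)
    for t in range(N):
        u_new[t] = u[t] - (F[t] @ (x_new - x_old))[0, 0] + g[t][0, 0]
        x_old = A @ x_old + B * u[t]
        x_new = A @ x_new + B * u_new[t]
    u = u_new
    e = y_d - run_trial(u)
    norms.append(np.linalg.norm(e))
```

Each trial solves the one-trial optimization exactly, so the recorded error norms decrease monotonically from trial to trial.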
The availability of full state feedback ensures that norm optimal iterative learning control can be implemented. Both weight matrices Q and R must be selected carefully, keeping the error norm in mind. The convergence property depends on the choice of the cost function, which weighs the performance of both the error sequence and the input rate of change. To demonstrate the effect of Q and R, the following assumptions are made: (i) Q is kept fixed; (ii) R = ρQ, where ρ > 0; (iii) ρ is the variable design parameter (ρ > 0).
This shows that the variable ρ has full control over the proposed algorithm: the error convergence is subsequently tuned by changing the parameter ρ.
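The effect of the tuning parameter can be illustrated with a small lifted-form experiment; the impulse response below is a placeholder, not the paper's plant, and ρ is the weight ratio discussed above:

```python
import numpy as np

# Effect of the weighting ratio (sketch): with Q fixed, the lifted-NOILC error
# contraction is e_{k+1} = (I + (1/rho) G G^T)^{-1} e_k, so a smaller rho gives
# faster convergence. G below is a placeholder first-order-lag lifted matrix.
N = 40
g_markov = 0.2 * 0.9 ** np.arange(N)          # assumed impulse response
G = np.array([[g_markov[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
e0 = np.ones(N)                               # initial error (u_0 = 0)

def norm_after(rho, trials=5):
    L = np.linalg.inv(np.eye(N) + (1.0 / rho) * G @ G.T)
    e = e0.copy()
    for _ in range(trials):
        e = L @ e
    return np.linalg.norm(e)

fast, slow = norm_after(rho=0.01), norm_after(rho=1.0)
assert fast < slow                            # smaller rho converges faster
```

Since both contraction operators share the eigenvectors of G Gᵀ, the smaller ρ shrinks every error component more per trial, which is the tuning behavior described above.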

Practical Application of NOILC Scheme for Piezo Motor Position Tracking
Different mathematical models have been presented for practical applications of piezoelectric actuators [41][42][43]. In particular, researchers have modified and proposed a novel Preisach model to simulate the hysteresis effect in a piezoelectric stack actuator and experimentally proved the feasibility of the model.
To test the standard norm optimal iterative learning control (NOILC) criterion for a faster rate of error convergence, and to prove the efficacy of the proposed iterative learning algorithm in the presence of an unknown dead-zone nonlinearity, simulations have been conducted with a linear model of a piezoelectric motor, as shown in Figure 2. Motors have many promising applications in industry. Piezoelectric motors are characterized by low speed and high torque, in contrast to typical electromagnetic motors, which feature high speed and low torque. In addition, features such as a compact structure, light weight, and quiet operation make piezoelectric motors resilient to external magnetic as well as radioactive fields. Piezoelectric motors are widely used for high-precision control applications because they can easily reach micrometer- or even nanometer-level accuracy. Such control brings additional difficulties to the establishment of a precise mathematical model of the piezoelectric motor: nonlinearities and unknown factors significantly disturb the control and the characteristics of the motor. Specifically, compared with the allowable level of the input signal, the piezoelectric motor has a large dead zone, and the size of the dead zone changes with the position displacement. Figure 2 shows the control scheme for position tracking, indicating the desired value, the measured output, the feedforward and feedback terms, and the error; the ILC (iterative learning control) law is applied from the first to the last iteration.
Full state knowledge is assumed, with measured values of the angular position and velocity. The simulation results are shown in Figure 3, where the weighting parameter is taken as 0.001. The reference trajectory is taken as 20{1 + 3 sin(0.35 t)}, t ∈ (0, 1, 2, …, 100), and both the position displacement and the velocity were tracked. The reference drives the system far beyond the linearization point and guarantees that the nonlinearity disturbs the system dynamics. Since the algorithm contains a feedback term, no additional pre-compensation is needed. It is worth noting that despite the use of a linear model, and even with imprecise physical constants in this model, the algorithm attains a rapid rate of error convergence, as shown in Figure 4. It is evident that the proposed algorithm is robust. In almost all cases the error norm can be controlled by the weighting parameter; only plants with non-minimum phase show a slower convergence rate. As mentioned above, the hysteresis effect makes precise position tracking complex. The proposed controller shows satisfactory results compared with typical PID [44], inverse-system control [45], robust control [46] and sliding mode control [47]. A further experimental study of the proposed controller can be carried out using any of the aforementioned models to verify this robust NOILC technique.
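The simulation setup described here can be sketched in lifted form as follows. The second-order position/velocity model stands in for the piezo-motor dynamics (which are not reproduced in this section), while the reference and the weighting value 0.001 follow the text:

```python
import numpy as np

# Piezo-motor tracking sketch: reference 20{1 + 3 sin(0.35 t)}, t = 1..100,
# weighting parameter 0.001 as in the text; the position/velocity model below
# is an illustrative placeholder, not the identified piezo model.
N = 100
A = np.array([[1.0, 0.05], [0.0, 0.95]])
B = np.array([[0.05], [0.05]])
C = np.array([[1.0, 0.0]])               # position output (CB != 0)
t = np.arange(1, N + 1)
y_d = 20.0 * (1.0 + 3.0 * np.sin(0.35 * t))

G = np.zeros((N, N))                     # lifted input-to-output matrix
for i in range(N):
    Ap = np.eye(2)
    for j in range(i, -1, -1):
        G[i, j] = (C @ Ap @ B)[0, 0]
        Ap = Ap @ A

rho = 0.001                              # weighting parameter from the text
Lstar = np.linalg.inv(np.eye(N) + (1.0 / rho) * G @ G.T)
e = y_d.copy()                           # e_0 with u_0 = 0
norms = [np.linalg.norm(e)]
for k in range(10):
    e = Lstar @ e
    norms.append(np.linalg.norm(e))
```

With 1/ρ = 1000, the smooth components of the reference contract rapidly over the first few trials, mirroring the kind of fast error decay the text reports in Figure 4.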
In general, the new iterative learning control algorithm proved fruitful in achieving a rapid rate of error convergence.

Higher Order Linear Discrete Time Plant Simulation Example
In this section, a numerical example is used to validate the proposed control algorithm, considering a linearized discrete-time system. The desired trajectory is y_d(t) = 0.4 t sin(2πt/T) on the interval t ∈ [0, T], with t ∈ (0, 1, 2, 3, …, 10) and T = 10. Initially, the input of the system is u_0 = 0.6. The weighting parameters Q and R are both set to 1; only the tuning parameter ρ is changed. Full state knowledge of the system is assumed. The input for the different trials is shown in Figure 5.
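A minimal simulation of this example can be sketched as follows; the system matrices are placeholder assumptions (the example's linearized plant is not reproduced here), while the trajectory, initial input and unit weights follow the text:

```python
import numpy as np

# Numerical-example sketch: y_d(t) = 0.4 t sin(2*pi*t/T), u_0 = 0.6, Q = R = 1.
# The plant matrices are illustrative assumptions.
T = 10
t = np.arange(1, T + 1)
y_d = 0.4 * t * np.sin(2 * np.pi * t / T)   # desired trajectory
A = np.array([[0.5, 1.0], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])                  # CB != 0, lifted G invertible

G = np.zeros((T, T))
for i in range(T):
    Ap = np.eye(2)
    for j in range(i, -1, -1):
        G[i, j] = (C @ Ap @ B)[0, 0]
        Ap = Ap @ A

Q = R = 1.0
Lstar = np.linalg.inv(np.eye(T) + (Q / R) * G @ G.T)
u = np.full(T, 0.6)                         # initial input u_0 = 0.6
e = y_d - G @ u                             # initial tracking error
norms = [np.linalg.norm(e)]
for k in range(25):
    e = Lstar @ e                           # e_{k+1} = (I + (Q/R) G G^T)^{-1} e_k
    u = u + (Q / R) * (G.T @ e)             # u_{k+1} = u_k + R^{-1} G^T Q e_{k+1}
    norms.append(np.linalg.norm(e))
```

The error norm decreases monotonically over the trials, consistent with the convergence behavior discussed in the next section; the invariant e = y_d − G u is preserved by the paired error and input updates.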

Simulation Results and Discussions
It can be observed that the convergence performance is good for the given system: after 20 or more iterations the plant output follows the desired trajectory y_d. The output for different trials is shown in Figure 6. The parameter ρ is set to 0.8384. The error convergence results for both values of ρ are shown in Figure 7. The convergence rate is faster for the smaller ρ than for the bigger value, which shows that the controller design is robust and that ρ should be chosen as small as possible for better and more precise convergence. It can be observed that as the number of trials increases, the rate of convergence increases accordingly. The predicted change of the reference further modifies the control action in an optimal way. Eventually, the output follows the desired trajectory more precisely after the 10th iteration because of the smaller value of ρ, as shown in Figure 8. This shows that the solution is robust against different input trials. Because the control action is highly convergent and the error converges at a faster rate, the output follows the reference trajectory very quickly and the settling time is very short, as shown in Figure 7. The rate of error convergence of the proposed norm optimal ILC over all trials is shown in Figure 8a,b, demonstrating that convergence is guaranteed even for a higher-order system. The error norm in Figure 8a indicates that a smaller ρ makes the error converge faster, while for a large ρ the error converges at a slower rate, as shown in Figure 8b.
Alternatively, the limitations of the algorithm can be summarized as follows. Although the algorithm can effectively deal with output noise, because the noise enters downstream of the control action, its speed still needs to be improved in future work. The robustness of the algorithm for nonlinear dynamical systems has not been proven theoretically. Moreover, the algorithm can only be implemented when the plant model is known in advance, and it is only valid for invertible systems.

Controller Robustness against Input Disturbance
Taking the same plant model with a disturbance added to the system, white noise is used to represent the disturbance behavior; in the Simulink model used for the current simulation, the noise enters at the plant input. The influence of the disturbance on the error evolution is shown in Figure 9a. The result shows that under the influence of input white noise, the error still decreases exponentially, with the fastest convergence over the first trials. Even the first ten trials are enough to bring the tracking error close to the ideal limit. This conclusion is strengthened by observing the error surface graph: after 10 trials, it is evident that the error range is small, as shown in Figure 9b. Therefore, the algorithm successfully controls the plant despite the presence of noise at the input.
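The disturbance experiment can be sketched in lifted form as follows; the plant impulse response, noise level and weight ratio are illustrative assumptions (the Simulink model itself is not reproduced), with white noise injected at the plant input on every trial:

```python
import numpy as np

# Robustness sketch: the lifted NOILC update with zero-mean white noise added
# to the input each trial (hypothetical plant). The error decays to a noise floor.
rng = np.random.default_rng(0)
N = 50
g = 0.3 * 0.85 ** np.arange(N)                 # assumed impulse response
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
y_d = np.sin(2 * np.pi * np.arange(1, N + 1) / N)
Lstar = np.linalg.inv(np.eye(N) + 50.0 * G @ G.T)   # assumed Q/R = 50

u = np.zeros(N)
norms = []
for k in range(15):
    d = 0.01 * rng.standard_normal(N)          # input white-noise disturbance
    e = y_d - G @ (u + d)                      # trial run with disturbed input
    u = u + 50.0 * (G.T @ (Lstar @ e))         # NOILC update from measured error
    norms.append(np.linalg.norm(e))

assert norms[-1] < norms[0]                    # still decays despite the noise
```

The measured error norm drops quickly over the first trials and then settles at a small noise floor, which is the qualitative behavior described for Figure 9a.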

Controller Validation for Dynamical Nonlinear System
To assess the effectiveness of the proposed controller design, we consider a nonlinear permanent magnet synchronous motor (PMSM) dynamical model that can be described as

M ẍ(t) = f_e(t) − B ẋ(t) − f_dis(t) − f_fric(t) − f_rip(t),

where M is the mass of the load, B is the damping coefficient, x and ẋ are the moving position and velocity, respectively, f_dis is the disturbance acting on the load, f_fric is the friction, and f_rip denotes the thrust ripple, comprising the cogging (alveolar) and reluctance components. The electromagnetic force f_e is responsible for eliminating these thrust ripples.
The nonlinear permanent magnet synchronous motor model in Figure 10 is decoupled and transformed into a new linear system through a feedback linearization structure. This satisfies the linearization conditions, and the exact sampled-time linearized model is described in [7]. With the sampling time T_s = 0.01 s, the desired trajectory is y_d(t) = 5 sin(t) on the interval [0, 1.5]. As shown in Figure 11a, the output of the nonlinear dynamical system tracks the desired trajectory smoothly, and eventually the error converges to zero. The value of the control parameter ρ can be changed to achieve the target sequence, and it can be seen that the algorithm is highly effective and valid for dynamical systems as well. The rate of error convergence is shown in Figure 11b. The error convergence over the first five trials is very fast, and the error then slowly goes to zero from trial 5 to trial 10. The convergence also depends on the system: if the system is highly dynamical and noisy, the error convergence is slower, but it ultimately goes to zero. Finally, the proposed algorithm is effective and the error converges exponentially and quickly. Thus, it can be implemented in many practical applications such as aircraft maneuvering, altitude control, XY plotter systems, robot manipulation, satellite control, position control, power electronic control and many industrial automation applications.
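Under the stated sampling time and reference, the tracking experiment can be sketched with a sampled double integrator standing in for the feedback-linearized PMSM; the exact linearized matrices of [7] and the weight ratio are not reproduced here and are assumptions:

```python
import numpy as np

# Sketch of the PMSM tracking case: after feedback linearization, the position
# loop is approximated here by a sampled double integrator (the exact model of
# [7] is not reproduced), with Ts = 0.01 s and y_d = 5 sin(t) on [0, 1.5].
Ts = 0.01
N = 150
A = np.array([[1.0, Ts], [0.0, 1.0]])          # sampled double integrator
B = np.array([[Ts**2 / 2], [Ts]])
C = np.array([[1.0, 0.0]])                     # position output
tgrid = Ts * np.arange(1, N + 1)
y_d = 5.0 * np.sin(tgrid)                      # desired trajectory

G = np.zeros((N, N))                           # lifted input-to-output matrix
for i in range(N):
    Ap = np.eye(2)
    for j in range(i, -1, -1):
        G[i, j] = (C @ Ap @ B)[0, 0]
        Ap = Ap @ A

Lstar = np.linalg.inv(np.eye(N) + 1e4 * G @ G.T)  # assumed Q/R ratio
e = y_d.copy()                                    # e_0 with u_0 = 0
norms = [np.linalg.norm(e)]
for k in range(10):
    e = Lstar @ e
    norms.append(np.linalg.norm(e))

assert norms[-1] < norms[0]
```

The smooth reference contracts sharply over the first trials and the residual then decays more slowly, consistent with the fast-then-slow convergence described for Figure 11b.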

Conclusions
An optimization-based error convergence criterion for iterative learning control was discussed, and the controller design methodology was proposed in this paper. The results showed the usual implementation of an iterative learning control algorithm for a linear discrete-time plant; the algorithm also works properly for nonlinear discrete-time systems. The combination of feedback and feedforward action (current and previous trial) is the leading topology of the proposed algorithm. The main advantage of this research is the ability of the optimality criterion to reject disturbances while converging the error; the proposed method is robust compared with a typical linear quadratic regulator (LQR). The rate of error convergence is the core concern of every control algorithm, and the proposed method is valid for linear and nonlinear control systems and able to converge the error at a faster rate toward its convergence limit. Robustness theory and the practical application of this optimal control algorithm to nonlinear plants remain subjects of future work.