Article

Improving the Quality of Industrial Robot Control Using an Iterative Learning Method with Online Optimal Learning and Intelligent Online Learning Function Parameters

1 Department of Control and Automation, Faculty of Electrical Engineering, University of Economics—Technology for Industries, Hanoi 11622, Vietnam
2 Faculty of Control and Automation, Electric Power University, Hanoi 11917, Vietnam
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(5), 1805; https://doi.org/10.3390/app14051805
Submission received: 10 January 2024 / Revised: 18 February 2024 / Accepted: 19 February 2024 / Published: 22 February 2024

Abstract:
It is inevitable that the characteristics of a robot system change unpredictably, or cannot be accurately determined, during movement, and that the system is affected by external disturbances. Many adaptive control methods, such as exact linearization, sliding-mode control, or neural control, can improve the trajectory-tracking quality of a robot's motion system. However, those methods require a great deal of computation to solve the constrained nonlinear optimization problem. This article first presents techniques for determining the online learning-function parameters of an intelligent controller comprising two loops: the inner loop is an estimator of the uncertain function components that compensates the robot's input, and the outer loop is an iterative learning controller, with optimal online learning-function parameters, that does not use a mathematical model of the robot. The optimality condition is based on a time-domain model and determines learning-function parameters that adapt according to the sum of squared tracking errors of each trial. The intelligent online learning-function parameters closely follow the general model to stabilize the robot system, based on the principle of intelligently estimating the uncertainty component and the total disturbance. This method is built on a Taylor-series analysis of the state vector and does not use a mathematical model of the system at all; it allows feedback linearization as well as intelligent stabilization of the system. The theory is verified on a 2-DOF planar robot implemented in Matlab R2022b. These findings indicate that superior tracking performance is achievable.

1. Introduction

Robots have been widely used in many industrial fields, so countless robot control methods have been researched and applied to help robots achieve accuracy. It cannot be prevented that, during movement, the robot system's properties may change unpredictably or cannot be accurately identified, and the system will be influenced by outside disturbances. The quality of trajectory tracking for robot motion systems may be enhanced using a variety of adaptive control techniques, including sliding-mode control and neural control. However, those approaches involve a great deal of computation to solve the constrained nonlinear optimization problem. This article offers some methods for determining the parameters of an intelligent controller's online learning function, using two loops: the outer loop is an iterative learning controller, which does not use the robot's mathematical model, and the inner loop is a state-feedback linearizing controller. The parameters of the intelligent controller's online learning function closely follow the joints. However, existing methods are all built on the mathematical model describing the robot, and they are classified depending on the level of accuracy of that mathematical model. The Euler–Lagrange model is a form of mathematical model often used for developing and synthesizing controllers; impacted by the external input noise τ_d(t), it reads as follows:
u_{dk} + \tau_d = H(q,\theta)\ddot{q} + C(q,\dot{q},\theta)\dot{q} + g(q,\theta) + F(\dot{q},\theta) \quad (1)
In which q is the vector of joint variables; θ is the vector of uncertain parameters; H(q, θ) is the inertia matrix, which is always symmetric positive definite; C(q, q̇, θ)q̇ is the vector of Coriolis and centrifugal components; F(q̇, θ) describes the effect of friction; g(q, θ) is the gravity vector; τ_d(t) is the noise vector in the actuator; and u_dk is the control signal.
With the number of joint variables equal to the number of input signals, such a robot system is fully actuated. The robot control task is to build a feedback controller so that the output, the joint-variable vector q, follows the desired trajectory R(t), with a tracking quality that does not depend on the uncertain parameters and components or on the external noise τ_d(t). Over time, due to many objective factors, such as equipment wear and tear from environmental impacts, the initially designed quality is no longer guaranteed. In this case, it is common to rebuild the controller. Currently, one unchangeable aspect of designing an industrial robot motion control system via the traditional approach is that the control object must always be understood clearly. This means that the control object needs to be modeled in the form of a mathematical model that precisely describes it; this model can take the form of a transfer function or a state model represented by a system of differential equations. At the same time, when building and designing a motion control system, it is necessary to anticipate objective factors that will impact the system in unexpected ways, such as external interference, possibly leading to damage to the system. When the control is no longer as effective as before, one must redefine the mathematical model of the control object, re-evaluate the rules of the unwanted objective effects entering the system, and rebuild the controller or, at least, redetermine its parameters. There have been many control methods to solve the above problem, for example, the exact linearization method [1]. If the external noise component τ_d(t) vanishes but the uncertain constant parameters still exist, we have the model inverse control method [2]. The Li–Slotine adaptive method applies when the parameter θ is arbitrary; however, it must be constant, and τ_d(t) = 0 is required [3].
In cases where both external noise components and uncertain constant parameters exist, there is the sliding control method [4,5]. On the other hand, chattering occurs when the system crosses the sliding surface too often, which is a drawback of the sliding control approach. To alleviate this vibration phenomenon, there is also a high-order sliding control method [6], but it still requires an estimate of the maximum deviation caused by the model uncertainty and cannot eliminate it. In addition, to reduce the chattering phenomenon, neural networks [7] are used to estimate the uncertainty components based on information collected from the robot (through q and its derivative q̇); the estimator output then compensates the uncertainty through the control signal. The intelligent control method mainly applied to robots in this article is the iterative learning method [8,9]. This control method is integrated with cyclically working systems and requires the same initial state in every cycle. At the same time, the uncertain parameters and the disturbance τ_d(t) are also required to be periodic, with exactly the same period as the working cycle of the industrial robot. The basic iterative learning control method does not always meet the desired control quality. There have been many improvements to iterative learning to improve control quality and expand the scope of practical applications. The first is the combination of iterative learning with traditional control methods, often called indirect iterative learning or feedforward iterative learning [9,10,11]. These improvements include [12] when the friction component alone cannot be determined, [13] when that is not possible, or [14] when the learning-function parameters cannot be found, as well as when the learning-function parameters need to change during the work cycles. In relation to this, a control trend for a class of robotic systems is model-matching control, including [15].
This control method requires the mathematical model (1) and thus encounters the problems and disadvantages of the previous traditional control methods. The next innovation is to use model-matching control while almost completely omitting the mathematical model (1) of the robot, applicable to the case where θ(t) and τ_d(t) are time dependent and not required to be periodic; it can therefore be considered an intelligent model-matching control method. This method is based on the theoretical results on the optimal control of each segment of the time axis available in [16] and on applying disturbance estimation to control self-balancing two-wheeled vehicles [17]. The main content of this article is to analyze and evaluate additional improvements to basic iterative learning that raise the quality of motion control of robot systems or microrobot systems [18,19,20] and expand the scope of practical applications. Specifically, the intelligent model-matching control almost completely avoids the mathematical model of the robot system, applies to the case where θ(t) and τ_d(t) depend on time, and does not require periodicity of the robot system. The content of this article includes seven parts. Part 1 states the problem of researching iterative learning for robot systems. Part 2 presents the 2-DOF robot dynamic equations. Part 3 presents two robot control structures using iterative learning. Part 4 presents the first structure for controlling the robot using iterative learning. Part 5 and Part 6 present the second structure for controlling the robot using iterative learning. The conclusion and ideas for further research are covered in Part 7.

2. Robot Planar Dynamic Model

A 2-DOF planar robot arm is shown in Figure 1, where θ_i, l_i, and m_i denote the joint angle, length, and mass of link i, respectively.
The dynamic equations for this system can be found using the Lagrange–Euler formulation [1]. The dynamic model of the 2-DOF robot is:
u_{dk} = H(q,\theta)\ddot{q} + C(q,\dot{q},\theta)\dot{q} + g(q,\theta) \quad (2)
With the following results:
H_{11} = m_1 l_{C1}^2 + I_1 + m_2\big(l_1^2 + l_{C2}^2 + 2 l_1 l_{C2}\cos\theta_2\big) + I_2
H_{12} = H_{21} = m_2\big(l_{C2}^2 + l_1 l_{C2}\cos\theta_2\big) + I_2
H_{22} = m_2 l_{C2}^2 + I_2
(C\dot{q})_1 = -m_2 l_1 l_{C2}\sin\theta_2\,\big(2\dot\theta_1\dot\theta_2 + \dot\theta_2^2\big)
(C\dot{q})_2 = m_2 l_1 l_{C2}\sin\theta_2\,\dot\theta_1^2
g_1 = (m_1 l_{C1} + m_2 l_1)\,g\cos\theta_1 + m_2 l_{C2}\,g\cos(\theta_1+\theta_2)
g_2 = m_2 l_{C2}\,g\cos(\theta_1+\theta_2) \quad (3)
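As a concrete cross-check of the dynamic-model terms above, the sketch below evaluates the inertia matrix H, the Coriolis/centrifugal vector, and the gravity vector in Python (numpy). All numeric link parameters are illustrative placeholders, not values taken from the paper:

```python
import numpy as np

# Terms of the planar 2-DOF dynamic model above. Link masses, center-of-mass
# distances and inertias below are illustrative placeholder values.
def dynamics_2dof(theta, dtheta, m=(1.0, 1.0), l1=0.45,
                  lc=(0.225, 0.325), inertia=(0.02, 0.03), g=9.81):
    m1, m2 = m
    lc1, lc2 = lc
    I1, I2 = inertia
    t1, t2 = theta
    dt1, dt2 = dtheta
    c2, s2 = np.cos(t2), np.sin(t2)

    # Inertia matrix H (symmetric, positive definite)
    H = np.array([
        [m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2,
         m2*(lc2**2 + l1*lc2*c2) + I2],
        [m2*(lc2**2 + l1*lc2*c2) + I2,
         m2*lc2**2 + I2]])

    # Coriolis/centrifugal vector C(q, qdot) * qdot
    Cdq = np.array([
        -m2*l1*lc2*s2*(2*dt1*dt2 + dt2**2),
         m2*l1*lc2*s2*dt1**2])

    # Gravity vector
    grav = np.array([
        (m1*lc1 + m2*l1)*g*np.cos(t1) + m2*lc2*g*np.cos(t1 + t2),
         m2*lc2*g*np.cos(t1 + t2)])
    return H, Cdq, grav
```

A quick sanity check is that H stays symmetric positive definite at any configuration, as required of an inertia matrix.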
When working in real time, factors that cannot be ignored include the impact of interference on the system and unwanted changes to the model parameters. The control problem is signal-tracking control for the industrial robot motion system. The most comprehensive solution is the iterative learning control method. This method can be added to a traditional control system to compensate for system errors, or it can be applied directly to robots without a conventional controller. This article proposes two solutions for controlling industrial robots by iterative learning with innovative online learning parameters, presented explicitly in Section 4, Section 5 and Section 6.

3. Two Structure Diagrams for Robot Control Using the Iterative Learning Method

The content of this article proposes two solutions to control industrial robots by iterative learning using innovative online learning parameters, as shown in Figure 2 and Figure 3.
These two solutions differ in that the first requires an additional intermediate controller in the inner loop, while the second does not. The task of this intermediate inner-loop controller is to make the industrial robot system suit the iterative learning control principle applied afterwards, because the iterative learning controller usually requires the control object to be stable (asymptotic stability is not necessarily needed). We need the mathematical model (1) of the industrial robot system to design this intermediate controller as a stabilizing system. In the second solution, the uncertainty estimation block, in addition to the pure task of estimating the functional uncertainty components to perform input compensation, must also carry out the additional function of stabilizing the system. If we apply the “smart” estimation principle introduced in Part 5 to design the uncertainty estimation block, then with the second solution we do not need the industrial robot system's mathematical model (1). However, the price for this is that the processing speed of the second solution is much slower than that of the first; according to the simulation results obtained, it is about 140 times slower. In addition, since both solutions use an uncertainty estimation block for compensation control, they are applicable to the adaptive control of industrial robot systems with additional noise at the input, i.e., applying the models described by (4) and (15). This is also a difference from the original model (1), because interference in the actuators is unavoidable. Furthermore, it is possible to assume that this input disturbance contains matched model uncertainties; in that case, both control solutions are adaptive to the disturbance and robust to the model bias.

4. Structure of the First Diagram for Robot Control Using the Iterative Learning Method

4.1. Algorithm Content According to the First Control Diagram

Take the mathematical model (1) and assume that C(q, q̇)q̇ and g(q) are unknown. Then we have:
H(q)\ddot{q} = u_{dk} + \tau_d \quad (4)
With the sum of all the functional uncertainty components of the model:
\tau_d = n(q, \dot{q}, \ddot{q}, t) - C(q, \dot{q})\dot{q} - g(q) \quad (5)
It can be seen immediately that, when using the control input:
u_{dk} = H(q)\big[\vartheta - g_1 q - g_2\dot{q}\big] \quad (6)
With g 1 , g 2 being two properly chosen matrices, the system becomes:
\ddot{q} = \vartheta - g_1 q - g_2\dot{q} + \xi \quad\Longleftrightarrow\quad \frac{d}{dt}\begin{pmatrix} q\\ \dot{q}\end{pmatrix} = \begin{pmatrix} 0_n & I_n\\ -g_1 & -g_2\end{pmatrix}\begin{pmatrix} q\\ \dot{q}\end{pmatrix} + \begin{pmatrix} 0_n\\ I_n\end{pmatrix}(\vartheta + \xi) \quad (7)
where \xi = H(q)^{-1}\tau_d is the uncertain component of system (7). Set:
X = \begin{pmatrix} q\\ \dot{q}\end{pmatrix}, \quad G = \begin{pmatrix} 0_n & I_n\\ -g_1 & -g_2\end{pmatrix}, \quad B = \begin{pmatrix} 0_n\\ I_n\end{pmatrix} \quad (8)
The model (7) will have the canonical form:
\dot{X} = GX + B(\vartheta + \xi) \quad (9)
In addition, for system (9) to be stable, that is, to have a Hurwitz matrix G, we must choose the two matrices g_1, g_2 so that the eigenvalues of G given in (8) lie to the left of the imaginary axis. The simplest way is to choose them by pole assignment using Matlab's place(.) command, and these poles should be selected as real (negative) numbers to eliminate oscillations during transients.
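The pole-assignment step described above can be sketched as follows; the concrete pole locations, and the use of scipy.signal.place_poles as a stand-in for Matlab's place(.), are illustrative assumptions:

```python
import numpy as np
from scipy.signal import place_poles

# Choosing g1, g2 by pole assignment. The double-integrator pair (A0, B0)
# models (9) before feedback; the poles are illustrative real negative values
# picked to avoid oscillatory transients, as the text suggests.
n = 2                                        # joints of the 2-DOF arm
A0 = np.block([[np.zeros((n, n)), np.eye(n)],
               [np.zeros((n, n)), np.zeros((n, n))]])
B0 = np.vstack([np.zeros((n, n)), np.eye(n)])

poles = [-3.0, -4.0, -5.0, -6.0]
fsf = place_poles(A0, B0, poles)
g12 = fsf.gain_matrix                        # state-feedback gain [g1  g2]
g1, g2 = g12[:, :n], g12[:, n:]

# Closed-loop matrix G of (8): block form with -g1, -g2 in the bottom row
G = np.block([[np.zeros((n, n)), np.eye(n)], [-g1, -g2]])
assert np.all(np.linalg.eigvals(G).real < 0)  # G is Hurwitz
```

The closed-loop eigenvalues of G land exactly on the requested poles, so the compensated system (9) is stable by construction.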
And then the system output is:
Y = q = (I_n, 0_n)X \quad (10)
and the control goal is that this output follows the given periodic reference trajectory R(t).
We now use the notation \vartheta_k(n) = \vartheta_k(t)|_{t = n t_s} and Y_k(n) = Y_k(t)|_{t = n t_s} for the input and output of systems (9) and (10), respectively, at time n t_s during trial k, with t_s being the sampling period. The P-type learning function, updating all time instants n at trial k, is then:
\vartheta_{k+1} = \vartheta_k + \mathbf{K} E_k \quad (11)
where:
\vartheta_k = \mathrm{col}\big(\vartheta_k(0), \vartheta_k(1), \ldots, \vartheta_k(S-1)\big); \quad E_k = \upsilon_k - \omega_k; \quad \omega_k = \mathrm{col}\big(Y_k(0), Y_k(1), \ldots, Y_k(S-1)\big); \quad \upsilon_k = \mathrm{col}\big(R(0), R(1), \ldots, R(S-1)\big); \quad \mathbf{K} = \mathrm{diag}(K, \ldots, K) \in \mathbb{R}^{nS \times nS}, \; K \in \mathbb{R}^{n \times n} \quad (12)
Here K is the learning-gain matrix, which must be chosen so that the learning iteration converges.
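A minimal sketch of the P-type update (11) on a stand-in scalar discrete plant (not the robot model) illustrates how the trial-wise error contracts when the learning gain keeps the lifted iteration contractive; the plant and gain values are assumptions chosen only for the demonstration:

```python
import numpy as np

# P-type ILC sketch: theta_{k+1} = theta_k + K*e_k, as in (11), applied to a
# stand-in scalar plant x(n+1) = a*x(n) + u(n). K = 0.18 keeps the lifted
# error iteration contractive for a = 0.8 (operator norm <= 0.9 per trial).
a = 0.8
K = 0.18
S = 50                                        # samples per trial
R = np.sin(2 * np.pi * np.arange(S) / S)      # periodic reference over one trial

theta = np.zeros(S)                           # learned feedforward input
err_norms = []
for k in range(60):                           # 60 trials, as in Section 4.2
    x = 0.0                                   # identical initial state each trial
    y = np.zeros(S)
    for nstep in range(S):
        x = a * x + theta[nstep]              # plant step driven by theta(n)
        y[nstep] = x
    e = R - y                                 # trial-k tracking error
    err_norms.append(float(np.linalg.norm(e)))
    theta = theta + K * e                     # P-type learning update
```

After 60 trials the error norm has shrunk by orders of magnitude, mirroring the trial-wise convergence claimed for (11).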
The uncertainty component of the system can, in general, be estimated and explicitly compensated as follows. First, model (9) is approximated at time n of trial k by a backward-Euler step, with the input already compensated by \hat\xi_k(n-1):
\frac{X_k(n) - X_k(n-1)}{t_s} \approx G X_k(n) + B\big(\vartheta_k(n) - \hat\xi_k(n-1) + \xi_k(n)\big)
Hence:
\hat{G} X_k(n) - X_k(n-1) - \hat{B}\big(\vartheta_k(n) - \hat\xi_k(n-1)\big) \approx \hat{B}\,\xi_k(n) =: h_k(n)
in which \hat{G} = I_{2n} - t_s G and \hat{B} = t_s B. So, from \hat{B}^T\hat{B} = t_s^2 I_n, we obtain the estimate \hat\xi_k(n) \approx \xi_k(n), used for compensation at the next time n+1:
\hat\xi_k(n) = (\hat{B}^T\hat{B})^{-1}\hat{B}^T h_k(n) = \frac{1}{t_s^2}\hat{B}^T h_k(n) = \frac{1}{t_s}B^T h_k(n) = \frac{1}{t_s}B^T\big(\hat{G}X_k(n) - X_k(n-1)\big) - \vartheta_k(n) + \hat\xi_k(n-1) \quad (13)
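The estimator (13) can be exercised numerically: if the plant data are generated by the same backward-Euler discretization, the estimate recovers the injected uncertainty exactly, up to round-off. The gains, sampling period, and disturbance signals below are illustrative assumptions:

```python
import numpy as np

# Exercising the backward-Euler disturbance estimator (13). The "plant" here
# is the same discretization, so xi_hat recovers the injected uncertainty
# exactly up to round-off. g1, g2, ts and xi_true are illustrative values.
n = 2
g1 = np.diag([30.0, 12.0])
g2 = np.diag([11.0, 7.0])
G = np.block([[np.zeros((n, n)), np.eye(n)], [-g1, -g2]])
B = np.vstack([np.zeros((n, n)), np.eye(n)])
ts = 0.05
Ghat = np.eye(2 * n) - ts * G                 # Ghat = I_2n - ts*G

X_prev = np.zeros(2 * n)
xi_hat = np.zeros(n)
max_err = 0.0
for step in range(1, 200):
    t = step * ts
    theta = np.array([np.sin(t), np.cos(t)])          # learned input (arbitrary)
    xi_true = np.array([0.5 * np.sin(2 * t), 0.3])    # unknown lumped uncertainty
    # Plant: backward-Euler step of Xdot = G X + B(theta - xi_hat + xi)
    X = np.linalg.solve(Ghat, X_prev + ts * B @ (theta - xi_hat + xi_true))
    # Estimator (13): xi_hat(n) = B^T(Ghat X - X_prev)/ts - theta + xi_hat(n-1)
    xi_new = B.T @ (Ghat @ X - X_prev) / ts - theta + xi_hat
    max_err = max(max_err, float(np.linalg.norm(xi_new - xi_true)))
    xi_hat, X_prev = xi_new, X
```

On real data the plant is only approximately described by this discretization, so the recovery is exact only up to the one-step Taylor truncation error, which shrinks with t_s.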
The structure of the intelligent control system for industrial robots is shown in Figure 2.
Below, Algorithm 1 describes the steps of the iterative learning controller combined with identification and compensation of the uncertain components (which need not be repetitive) for the industrial robot after feedback linearization; the trials k run continuously with the control process and end only when the control stops.
Algorithm 1: The first structure for robot control: the iterative learning method with intelligent online learning-function parameters.
1. Assign t = 0, \hat\xi = 0. Choose t_s > 0; calculate \hat{G}, S. Choose K. Assign the small initial values X_0 = 0, \hat\xi_k = 0, \vartheta_k = \mathrm{col}(\vartheta_k(0), \ldots, \vartheta_k(S-1)) = \mathrm{col}(R(0), \ldots, R(S-1)).
2. while the control continues do
3.  for n = 0, 1, \ldots, S-1 do
4.   Send \vartheta(n) - \hat\xi into the uncertain system (9) and determine X = X(t + n t_s).
5.   Calculate \hat\xi \leftarrow B^T(\hat{G}X - X_0)/t_s - \vartheta(n) + \hat\xi and set X_0 \leftarrow X.
6.   Establish Y(n) = (I_n, 0_n)X and calculate e(n) = R(n) - Y(n).
7.  end for
8.  Assemble the stacked error vector E from e(n), n = 0, 1, \ldots, S-1, according to (12).
9.  Update the vector \vartheta from its current value according to (11), i.e., calculate \vartheta(n), n = 0, 1, \ldots, S-1 for the next trial.
10.  Set t \leftarrow t + T.
11. end while

4.2. Applied to Robot Control

For the simulation, the following values are assigned:
g_1 = \mathrm{diag}(30, 12), \quad g_2 = \mathrm{diag}(11, 7), \quad K = \mathrm{diag}(2.9, 1.8), \quad t_s = 0.05\,\mathrm{s}, \quad T = 20\,\mathrm{s}
The references are periodic functions of period T:
R_1(t) = \begin{cases}\min(t/2,\;1.5) & \text{for } t \le T/2\\ R_1(T-t) & \text{for } T/2 < t \le T\end{cases}; \qquad R_2(t) = \sin\!\big(2\pi t/T\big)
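A minimal sketch of these two periodic references (a symmetric trapezoid ramping with slope 1/2 and saturating at 1.5, and a sinusoid), under the stated period T = 20 s:

```python
import numpy as np

# The two periodic references of the Section 4.2 simulation: a symmetric
# trapezoid R1 (slope 1/2, saturating at 1.5) and a sinusoid R2, period T.
T = 20.0

def R1(t):
    t = t % T
    t = min(t, T - t)         # mirror the second half-period: R1(t) = R1(T - t)
    return min(t / 2.0, 1.5)  # ramp with slope 1/2, saturated at 1.5

def R2(t):
    return float(np.sin(2 * np.pi * (t % T) / T))
```

Sampling these at t_s = 0.05 s gives the S = T/t_s reference samples R(0), ..., R(S−1) used by the learning law.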
We obtain the simulation results in Figure 4, Figure 5, Figure 6 and Figure 7.
Comment. 
Figure 4 and Figure 5 depict the simulation results, which demonstrate that, when combined with the identification and disturbance-compensation stage (13), the iterative learning controller (11) achieves good signal-tracking quality. The tracking error after 60 trials is small enough to be acceptable:
-0.03 \le e_1(t) \le 0.065 \quad \text{and} \quad -0.017 \le e_2(t) \le 0.02
The residual error e_1(t) in the simulation results is due to the non-smoothness of the trapezoidal reference R_1(t) at the points t_0 = 0; t_1 = 1.5 t_s; t_2 = T - 1.5 t_s; t_3 = T. Through the cross-coupling between channels, it also affects the tracking error e_2(t).

5. The Structure of the Second Iterative Learning Controller with Model-Free Determination of Optimal Learning Parameters for an Industrial Robot

Stabilizing the system without traditional control methods makes it possible to apply iterative learning control even to unstable systems, without using a mathematical model of the system.
The tool is derivative estimation via Taylor-series analysis. Besides Taylor analysis, there are many numerical methods to approximate the derivative, such as Lagrangian interpolation or comparison with sample models, as in [21,22,23].

5.1. Algorithm Content According to the Second Control Diagram

It is obvious that the Euler–Lagrange model (1) of robot manipulators is equivalent to
\ddot{q} = -A_1 q - A_2\dot{q} + u_{dk} + \mu \quad (14)
where A_1, A_2 are two arbitrarily chosen matrices and
\mu = \tau_d + \big(I_n - H(q,\theta)\big)\ddot{q} - \big(C(q,\dot{q},\theta) - A_2\big)\dot{q} - g(q,\theta) + A_1 q \quad (15)
From (15), we see that all of the model uncertainty of (1) is lumped into the vector \mu. Based on the equivalent model (14), the proposed intelligent controller for the robot is constructed as follows. First, the unknown function vector \mu is estimated, and its estimate is subtracted at the input, as shown below.
u_{dk} = \vartheta - \hat\mu \quad (16)
With the estimation bias \delta as in (18), system (14) becomes linear:
\dot{X} = AX + B(\vartheta + \delta), \quad Y = q = (I_n, 0_n)X \quad (17)
with
X = \begin{pmatrix} q\\ \dot{q}\end{pmatrix}, \quad A = \begin{pmatrix} 0_n & I_n\\ -A_1 & -A_2\end{pmatrix}, \quad B = \begin{pmatrix} 0_n\\ I_n\end{pmatrix}, \quad \delta = \mu - \hat\mu \quad (18)
where 0_n and I_n are the n \times n zero and identity matrices, respectively.
Secondly, we control the compensated system (17) so that its output Y(t) converges to the desired reference R(t).

5.2. Model-Free Disturbance Compensation for Internal Loop Control by Feedback Linearization

Returning to the compensated input \vartheta_k in Equation (16), and denoting the present time as t = kT + \tau with 0 \le \tau < T, \vartheta_k = \vartheta_k(\tau), X_k = X_k(\tau), Equation (17) is rewritten as:
\dot{X}_k(\tau) = AX_k(\tau) + B\big(\vartheta_k(\tau) - \hat\mu_k(\tau) + \mu_k(\tau)\big) \quad (19)
The model-free feedback linearization block (Figure 3) aims to estimate \hat\mu based on the measured values of X_k(\tau) at the two times \tau = n t_s and \tau = (n-1)t_s. With 0 < t_s \ll 1 being an arbitrarily chosen small constant, the Taylor expansion of X_k(\tau) around n t_s gives:
X_k\big((n-1)t_s\big) = X_k(n t_s) - t_s\dot{X}_k(n t_s) + \frac{t_s^2}{2}\ddot{X}_k(\varsigma) \quad (20)
where (n-1)t_s \le \varsigma \le n t_s, or
\dot{X}_k(n t_s) \approx \frac{X_k(n t_s) - X_k\big((n-1)t_s\big)}{t_s} \quad (21)
Assuming the last term of expansion (20) is very small and can be ignored, expression (19) is approximated at \tau = n t_s by replacing the unknown \mu_k(n t_s) with its estimate \hat\mu_k(n t_s):
\frac{X_k(n t_s) - X_k\big((n-1)t_s\big)}{t_s} \approx AX_k(n t_s) + B\big(\vartheta_k(n t_s) - \hat\mu_k((n-1)t_s) + \mu_k(n t_s)\big) \quad (22)
Then, we calculate:
B\hat\mu_k(n t_s) = \frac{X_k(n t_s) - X_k\big((n-1)t_s\big)}{t_s} - AX_k(n t_s) - B\big(\vartheta_k(n t_s) - \hat\mu_k((n-1)t_s)\big)
Which yields:
\hat\mu_k(n t_s) = B^T\left(\frac{X_k(n t_s) - X_k\big((n-1)t_s\big)}{t_s} - AX_k(n t_s)\right) - \vartheta_k(n t_s) + \hat\mu_k\big((n-1)t_s\big) \quad (23)
Theorem 1. 
The value \hat\mu_k(n t_s) calculated in Equation (23) minimizes the approximation error of expression (22).
Proof. 
The error \varepsilon in expression (22) is:
\varepsilon = AX_k(n t_s) + B\big(\vartheta_k(n t_s) - \hat\mu_k((n-1)t_s) + \mu_k(n t_s)\big) - \frac{X_k(n t_s) - X_k((n-1)t_s)}{t_s} = B\mu_k(n t_s) + \gamma
where
\gamma = AX_k(n t_s) + B\big(\vartheta_k(n t_s) - \hat\mu_k((n-1)t_s)\big) - \frac{X_k(n t_s) - X_k((n-1)t_s)}{t_s}
and the estimate satisfies, according to expression (22):
\frac{X_k(n t_s) - X_k((n-1)t_s)}{t_s} = AX_k(n t_s) + B\big(\vartheta_k(n t_s) - \hat\mu_k((n-1)t_s) + \hat\mu_k(n t_s)\big)
The optimization problem would then be written:
\mu^* = \arg\min_{\mu_k}\|\varepsilon\|^2 = \arg\min_{\mu_k}\|B\mu_k + \gamma\|^2 = \arg\min_{\mu_k}\big(B\mu_k + \gamma\big)^T\big(B\mu_k + \gamma\big) = \arg\min_{\mu_k}\big(\mu_k^T\mu_k + 2\gamma^TB\mu_k + \gamma^T\gamma\big)
This has a unique solution:
\mu^* = -B^T\gamma
which coincides with μ _ ^ k ( n t s ) given in (23).
From (23), we see that the estimator created to compensate for the noise and model uncertainty lumped in the vector \mu does not use the mathematical model (1). Therefore, the disturbance compensator (16) with \hat\mu obtained from expression (23) is model-free. □
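The least-squares step at the heart of this proof can be verified numerically: with B = (0_n; I_n), so that BᵀB = I_n, the closed form −Bᵀγ matches a general least-squares solve:

```python
import numpy as np

# Least-squares step of Theorem 1's proof: with B = [0_n; I_n] (B^T B = I_n),
# the minimizer of ||B mu + gamma||^2 is mu* = -B^T gamma, which is the form
# appearing in (23). Cross-checked against numpy's least-squares solver.
n = 2
B = np.vstack([np.zeros((n, n)), np.eye(n)])
rng = np.random.default_rng(1)
gamma = rng.standard_normal(2 * n)            # arbitrary residual vector

mu_closed = -B.T @ gamma                                 # closed-form minimizer
mu_lstsq, *_ = np.linalg.lstsq(B, -gamma, rcond=None)    # argmin ||B mu + gamma||
assert np.allclose(mu_closed, mu_lstsq)
```

Because B has orthonormal columns, the pseudoinverse (BᵀB)⁻¹Bᵀ collapses to Bᵀ, which is exactly why (23) needs no matrix inversion online.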

5.3. Outer-Loop Control by Iterative Learning Controller Design

Expression (17) describes the system (14) made LTI by compensating the synthetic disturbance \mu of (15) through the compensator (16); for iterative learning systems, it is rewritten in ILC form as follows:
X_k(n+1) = \hat{A}X_k(n) + \hat{B}\vartheta_k(n) + \delta_k(n), \quad Y_k(n) = \hat{C}X_k(n) \quad (24)
where n = 0, 1, \ldots, N = T/t_s, \; X_k(N) = X_{k+1}(0), and
\hat{A} = \exp(A t_s), \quad \hat{B} = \int_0^{t_s} e^{At}B\,dt, \quad \hat{C} = (I_n, 0_n) \quad (25)
Assuming we have already chosen the two matrices A_1, A_2 that make the matrix A given in (18) Hurwitz, the control objective is now to choose a proper learning parameter K for the P-type update rule:
\vartheta_{k+1}(n) = \vartheta_k(n) + KE_k(n), \quad \text{with } E_k(n) = R(n) - Y_k(n) \quad (26)
so that the convergence requirement E_k(n) \to 0 is satisfied for all n, or the error at least comes as close as possible to the origin.
The value \vartheta_k given by the ILC (26) is piecewise constant, so the discrete-time model (24) is equivalent to the continuous-time model (17). Expression (24) uses no information about the system model (1) either; therefore, it can be applied to all robot controllers. From (24), under the assumption \delta_k(n) = 0:
Y_{k+1}(n) = \hat{C}\hat{A}^n X_{k+1}(0) + \sum_{j=0}^{n-1}\hat{C}\hat{A}^{n-j-1}\hat{B}\vartheta_{k+1}(j)
which, together with the repeatability condition X_k(0) = X_{k+1}(0) for all k, gives:
E_{k+1}(n) = R(n) - Y_{k+1}(n) = R(n) - \hat{C}\hat{A}^n X_k(0) - \sum_{j=0}^{n-1}\hat{C}\hat{A}^{n-j-1}\hat{B}\big(\vartheta_k(j) + KE_k(j)\big) = R(n) - Y_k(n) - \sum_{j=0}^{n-1}\hat{C}\hat{A}^{n-j-1}\hat{B}KE_k(j) = \big(I - \hat{C}\hat{B}K\big)E_k(n) - \sum_{j=0}^{n-2}\hat{C}\hat{A}^{n-j-1}\hat{B}KE_k(j)
Hence,
E _ k + 1 = Φ E _ k
where
E_k = \begin{pmatrix} E_k(0)\\ E_k(1)\\ \vdots\\ E_k(N-1)\end{pmatrix} \quad \text{and} \quad \Phi = \begin{pmatrix} I - \hat{C}\hat{B}K & 0 & \cdots & 0\\ -\hat{C}\hat{A}\hat{B}K & I - \hat{C}\hat{B}K & \cdots & 0\\ \vdots & & \ddots & \vdots\\ -\hat{C}\hat{A}^{N-2}\hat{B}K & \cdots & -\hat{C}\hat{A}\hat{B}K & I - \hat{C}\hat{B}K \end{pmatrix} \quad (28)
Theorem 2. 
For the situation  \delta_k(n) = 0 , the requirement  E_k(n) \to 0  for all  n = 0, 1, \ldots, N-1  is fulfilled if and only if the P-type learning parameter K is chosen in a way that makes the matrix \Phi given in (28) Schur.
Proof. 
The autonomous iteration (27) converges to the origin if and only if \Phi is Schur, which proves the claim. □
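Theorem 2's condition can be checked numerically: since Φ in (28) is block lower triangular, it is Schur exactly when every diagonal block I − ĈB̂K has spectral radius below one. The sketch below uses the Section 5.5 gains; building Â, B̂ of (25) via the augmented-matrix exponential is a standard discretization device assumed here, not a construction taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Checking Theorem 2's Schur condition with the Section 5.5 gains. Phi of (28)
# is block lower triangular, so it is Schur iff the repeated diagonal block
# I - C*Bhat*K has spectral radius < 1.
n, ts = 2, 0.02
A1 = np.diag([31.0, 12.0])
A2 = np.diag([12.0, 8.0])
K = np.diag([0.8, 0.3])
A = np.block([[np.zeros((n, n)), np.eye(n)], [-A1, -A2]])
B = np.vstack([np.zeros((n, n)), np.eye(n)])
C = np.hstack([np.eye(n), np.zeros((n, n))])

# Augmented-exponential trick: exp([[A, B], [0, 0]]*ts) holds Ahat and Bhat
aug = np.zeros((3 * n, 3 * n))
aug[:2 * n, :2 * n] = A
aug[:2 * n, 2 * n:] = B
M = expm(aug * ts)
Ahat = M[:2 * n, :2 * n]          # Ahat = exp(A*ts), per (25)
Bhat = M[:2 * n, 2 * n:]          # Bhat = integral_0^ts exp(A t) B dt, per (25)

rho = max(abs(np.linalg.eigvals(np.eye(n) - C @ Bhat @ K)))
assert rho < 1.0                  # K makes Phi Schur, so the ILC iteration converges
```

With these gains the spectral radius sits just below one, which is consistent with the slow but monotone trial-wise convergence reported in the simulations.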

5.4. The Closed-Loop System’s Performance and Control Algorithm

The following Algorithm 2 implements the proposed model-free controller (16), where the \vartheta and \hat\mu values are obtained from expressions (26) and (23), respectively. In this control algorithm, each while loop executes one working period T of the robot as one trial. Figure 2 shows the closed-loop system's output tracking performance for the scenario \delta_k(n) \ne 0.
Theorem 3. 
The proposed model-free control framework, comprising the feedback-linearization block via the disturbance compensator (16), (23) and the ILC block (26), drives the output tracking error  E_k(\tau)  of the robot manipulators (1) to a  t_s -dependent neighborhood  O  of the origin, provided  \tau_d  is bounded and continuous. The smaller the selected  t_s , the smaller  O  becomes.
Proof. 
Because the noise \tau_d is continuous and bounded, the total disturbance \mu and the residual \delta are also continuous and bounded; let \Delta denote an upper bound of \|\delta\|. By Theorem 1, this bound, and with it O, can be reduced arbitrarily by reducing t_s.
Consider the reference system:
\dot\omega = A\omega + B\vartheta \quad \text{with} \quad \omega = \mathrm{vec}(R, \dot{R}) \quad (29)
Subtracting (17) from (29) yields:
\dot\sigma = A\sigma - B\delta, \quad \text{where} \quad \sigma = \mathrm{vec}(E, \dot{E}) \quad (30)
Using the Lyapunov equation A^TP + PA = -Q with an arbitrary positive definite matrix Q, we always obtain a positive definite P because A is Hurwitz. For the positive definite function V(\sigma) = \sigma^TP\sigma we have:
\dot{V} = (A\sigma - B\delta)^TP\sigma + \sigma^TP(A\sigma - B\delta) = \sigma^T(A^TP + PA)\sigma - 2\sigma^TPB\delta = -\sigma^TQ\sigma - 2\sigma^TPB\delta \le -\lambda_{\min}(Q)\|\sigma\|^2 + 2\|PB\|\Delta\|\sigma\| = -\|\sigma\|\big(\lambda_{\min}(Q)\|\sigma\| - 2\|PB\|\Delta\big)
where \lambda_{\min}(Q) is the smallest eigenvalue of Q. Hence \dot{V} < 0 as long as
\|\sigma\| > \frac{2\|PB\|\Delta}{\lambda_{\min}(Q)}
so the tracking-error vector tends to the set:
O = \left\{\sigma \in \mathbb{R}^{2n} \;:\; \|\sigma\| \le \frac{2\|PB\|\Delta}{\lambda_{\min}(Q)}\right\}
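The bound defining O can be evaluated with a Lyapunov solver; the value of Δ below is an assumed placeholder for the residual disturbance bound:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Evaluating the ultimate bound of Theorem 3's proof: solve A^T P + P A = -Q,
# then the error converges to the ball of radius 2*||P B||*Delta/lambda_min(Q).
# Delta is an assumed placeholder for the residual disturbance bound.
n = 2
A1 = np.diag([31.0, 12.0])
A2 = np.diag([12.0, 8.0])
A = np.block([[np.zeros((n, n)), np.eye(n)], [-A1, -A2]])
B = np.vstack([np.zeros((n, n)), np.eye(n)])

Q = np.eye(2 * n)
P = solve_continuous_lyapunov(A.T, -Q)        # solves A^T P + P A = -Q
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)  # P > 0 since A is Hurwitz

Delta = 0.1                                   # assumed bound on ||delta||
radius = 2 * np.linalg.norm(P @ B, 2) * Delta / np.linalg.eigvalsh(Q).min()
```

The radius shrinks linearly with Δ, and Δ itself shrinks with t_s by Theorem 1, which is the quantitative content of the theorem.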
This control algorithm contains all the necessary calculations and has the following structure:
Algorithm 2: The structure of the second iterative learning controller with model-free determination of optimal learning parameters for an industrial robot
1. Choose two matrices A_1, A_2 so that A given in (18) becomes Hurwitz. Determine \hat{A}, \hat{B}, \hat{C} given in (25) and \Phi given in (28). Choose 0 < t_s \ll 1; calculate S = T/t_s. Determine the difference coefficients D_1 = 1/t_s and D_0 = -D_1 used in (23). Initialize the estimate \hat\mu and the tracking error E_0. Assign \vartheta(n) = R(n) for n = 0, 1, \ldots, S-1 and Z = 0. Choose the learning parameter K so that \Phi of (28) becomes Schur.
2. while the control continues do
3.  for n = 0, 1, \ldots, S-1 do
4.   Send u_{dk} = -A_1q - A_2\dot{q} + \vartheta(n) - \hat\mu to the robot for a period of t_s; measure X = \mathrm{vec}(q, \dot{q}) and Y(n) = q.
5.   Calculate \hat\mu \leftarrow B^T\big((X - Z)/t_s - AX\big) - (\vartheta(n) - \hat\mu); set Z \leftarrow X.
6.  end for
7.  Assemble \vartheta = \mathrm{vec}(\vartheta(0), \ldots, \vartheta(S-1)) and E = \mathrm{vec}(E(0), \ldots, E(S-1)).
8.  Calculate K = \arg\min_{aI \le K \le bI}\|\Phi E\| and update \vartheta \leftarrow \vartheta + KE; E_0 \leftarrow E.
9. end while
Thus, each while loop represents one working cycle (one trial) of this algorithm. In addition, it can be seen that the above control algorithm does not use model (1) of the robot at all, so it is also an intelligent algorithm. □

5.5. Applied to Robot Control

With
t_s = 0.02\,\mathrm{s} \quad \text{and} \quad \tau_d(t) = \begin{pmatrix}\theta_{11}\sin(\theta_{12}t)\\ \theta_{13}\sin(\theta_{14}t)\end{pmatrix}
where the parameters \theta_{11}, \ldots, \theta_{14} are random. The robot is assumed to have a working period T = 10\,\mathrm{s}. The two-degree-of-freedom robot has the following parameters (Figure 1): l_1 = 0.45, l_2 = 0.65, g = 9.81.
Assigned:
R_1(t) = \begin{cases}\min(t/2,\;1.5) & \text{for } t \le T/2\\ R_1(T-t) & \text{for } T/2 < t \le T\end{cases}; \qquad R_2(t) = \sin\!\big(2\pi t/T\big)
A_1 = \mathrm{diag}(31, 12); \quad A_2 = \mathrm{diag}(12, 8); \quad K = \mathrm{diag}(0.8, 0.3); \quad t_s = 0.02\,\mathrm{s}
Simulation results to verify the above algorithm are shown in Figure 8 and Figure 9.
Comment. 
These tracking results demonstrate the convergence of both joint variables to the required references. Over the working cycle, after 250 trials the largest tracking error values are approximately \max|E_{250}(1)| \approx 0.03 and \max|E_{250}(2)| \approx 0.02. Furthermore, they confirm that the tracking error decreases with the number of trials conducted. Therefore, these simulation findings fully validate all of the above-mentioned theoretical claims.

6. The Structure of the Second Iterative Learning Controller with Model-Free Determination of Online Learning Parameters for an Industrial Robot

The structure of two control loop circuits is still used, as shown in Figure 3.

6.1. Control the Inner Loop

Next is the inner loop. This loop uses Taylor-series analysis to estimate \hat\mu at time \tau = kT + nt_s from the previously measured values q_k(j), \dot{q}_k(j), j = n, n-1. This estimation formula is given in (35).
The remarkable thing is that, while the iterative learning controller works with the cyclic period T, the inner-loop controller is not cyclic; in other words, the inner-loop estimator also handles non-periodic functional uncertainty components. Neither control loop uses the mathematical model (1) of the robot at all, and that is their intelligent feature.

6.2. Outer-Loop Control by Iterative Learning Controller Design

This is a loop circuit that uses cybernetics P-style iteration with a diagonal learning function parameter matrix:
K k = d i a g K k ( n ) , n = 1 , 2 , , n
The number of elements equals the number of joint variables q _ R n (output) and the input variable u _ d k because robot (1) is a sufficient actuator. Each K k ( j ) element is determined according to:
$$ K = \underline{a}_k \cdot \arg\min_{0 \le z \le 1} \left\| \underline{Z}\,\underline{E}_{k-1} - z\,\underline{E}_{k-1} \right\| $$
or
$$ K = \underline{a}_k \cdot \arg\min_{0 \le z \le 1} \left\| \underline{E}_{k-1}\,\underline{Z} + z\,\underline{E}_{k-1} \right\| $$
This loop generates the control signal
$$ \underline{u}_{dk} = -A_1 \underline{q} - A_2 \dot{\underline{q}} - \hat{\underline{\mu}} + \underline{\vartheta} $$
where the two matrices $A_1, A_2$ in (6) are chosen to be Hurwitz, $0 < t_s \ll 1$ is a sufficiently small constant, and $\underline{\vartheta} \in \mathbb{R}^n$ is obtained from the iterative learning controller in piecewise-constant form $\underline{\vartheta}(t) = \underline{\vartheta}_k(\tau)$, i.e.,
$$ \underline{\vartheta}_k(\tau) = \underline{\vartheta}_k(n) \quad \text{when } n t_s \le \tau < (n+1) t_s, \quad n = 0, 1, \ldots, N-1 $$
and the learning update
$$ \underline{\vartheta}_{k+1} = \underline{\vartheta}_k + K_k \underline{E}_k $$
with $\underline{\vartheta} = \mathrm{vec}\big(\underline{\vartheta}(0), \ldots, \underline{\vartheta}(N-1)\big)$ and $\underline{E} = \mathrm{vec}\big(\underline{E}(0), \ldots, \underline{E}(N-1)\big)$.
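The piecewise-constant P-type update above can be sketched numerically as follows; the trial-stacked signals are stored as N-by-n arrays (one row per sample, one column per joint), and the diagonal gain scales each joint's error. The numbers are toy values for illustration only.

```python
import numpy as np

def ilc_update(theta_k, E_k, K_k):
    """P-type ILC update: theta_{k+1}(n) = theta_k(n) + K_k E_k(n) at each
    of the N samples; rows are samples, columns are joints."""
    return theta_k + E_k @ K_k  # diagonal K_k scales each joint's error

# Two joints, N = 4 samples per trial (toy numbers)
K_k = np.diag([0.8, 0.3])
theta = np.zeros((4, 2))   # initial learned input, one row per sample
E = np.ones((4, 2))        # constant unit tracking error over the trial
theta = ilc_update(theta, E, K_k)
print(theta[0])            # [0.8 0.3]
```

The update touches every sample of the trial at once, which is exactly the stacked (vec) formulation used in the text.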
The remaining component $\hat{\underline{\mu}}$ in (38) is the estimated value of $\underline{\mu}$ given in (39), which includes the input uncertainty $\underline{\tau}_d$ and the model bias, i.e.,
$$ \underline{\mu} = \underline{\tau}_d + \big[I_n - H(\underline{q}, \underline{\theta})\big] \ddot{\underline{q}} - C(\underline{q}, \dot{\underline{q}}, \underline{\theta})\, \dot{\underline{q}} - \underline{g}(\underline{q}, \underline{\theta}) - F(\underline{\theta})\, \dot{\underline{q}} $$
at time τ = k T + n t s . The inner loop controller determines it.
To facilitate implementation, Algorithm 3 below summarizes the calculations for the controller (16) proposed above.
Algorithm 3: The structure of the second iterative learning controller with model-free determination of online learning parameters for an industrial robot.
1: Choose the two matrices A_1, A_2 in (6) to be Hurwitz and a sufficiently small constant 0 < t_s ≪ 1. Calculate N = T / t_s.
   Determine D_1^T = 1 / t_s; D_0^T = −D_1^T.
   Initialize the estimate μ̂ and the tracking error E_0.
   Assign the robot's initial state and initial output to the outer-loop (iterative learning) controller: ϑ̄(n) = R̄(n), n = 0, 1, …, N − 1, and Z̄ = 0.
2: while the controller is running do
3:   for n = 0, 1, …, N − 1 do
4:     Forward u_dk = −A_1 q − A_2 q̇ − μ̂ + ϑ̄(n) to the robot for a duration of t_s.
       Measure Ȳ(n) = q̄ and X̄ = vec(E, Ė).
       Assess E_k(n) = R̄(n) − Ȳ(n).
5:     Calculate μ̂_k ← z̄ D_0^T + X̄ D_1^T − A X̄ − (ϑ̄(n) − μ̂).
       Set z̄ ← X̄.
6:   end for
7:   Assemble ϑ̄ = vec(ϑ̄(0), …, ϑ̄(N − 1)) and E = vec(E(0), …, E(N − 1)).
8:   Calculate K = a_k · argmin_{0 ≤ z ≤ 1} ‖Z̄ E_{k−1} − z E_{k−1}‖ or K = a_k · argmin_{0 ≤ z ≤ 1} ‖E_{k−1} Z̄ + z E_{k−1}‖,
     then update ϑ ← ϑ + K Ē; Ē_0 ← Ē.
9: end while
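The scalar minimization over 0 ≤ z ≤ 1 in step 8 can be carried out with a simple grid search. The sketch below leaves the objective J(z) as a caller-supplied function, since its exact form depends on which of the two formulas is chosen (the example objective shown is a hypothetical stand-in, not the paper's formula):

```python
import numpy as np

def linesearch_gain(J, a_k, grid=201):
    """Grid search for z* = argmin over 0 <= z <= 1 of J(z), then K = a_k * z*.
    J is the scalar objective from step 8; any callable J(z) -> float works."""
    zs = np.linspace(0.0, 1.0, grid)
    z_star = zs[np.argmin([J(z) for z in zs])]
    return a_k * z_star

# Hypothetical example objective: J(z) = ||E_prev - z * E_prev||,
# which is minimized at z = 1, so K = a_k * 1 = 0.8 here.
E_prev = np.array([0.3, -0.2, 0.1])
J = lambda z: np.linalg.norm(E_prev - z * E_prev)
print(linesearch_gain(J, a_k=0.8))  # 0.8
```

A grid search is adequate here because the search interval is fixed and one-dimensional; a golden-section or ternary search could replace it if the objective is known to be unimodal.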

6.3. Applied to Robot Control

The effectiveness of the control algorithm is assessed and confirmed for a two-DOF planar robot using the following data:
Friction components:
$$ F(\underline{q}, \dot{\underline{q}}, \underline{\theta})\, \dot{\underline{q}} = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix};\quad f_1 = \theta_{11} \dot{q}_1 + \theta_{12}\, q_1 \tanh(\dot{q}_1);\quad f_2 = \theta_{21} \dot{q}_2 + \theta_{22}\, q_2 \tanh(\dot{q}_2) $$
With input noise assumed as follows:
$$ \underline{\tau}_d(t) = \begin{bmatrix} \theta_{31} \sin(2.3\pi t) \\ \theta_{32} \sin(0.85\pi t) \end{bmatrix} $$
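A minimal sketch of the friction components (20) and input disturbance (21) follows; the numerical values of the θ parameters are placeholders, since the paper chooses them randomly.

```python
import numpy as np

# Friction and disturbance parameters (placeholders; chosen randomly in the paper)
th11, th12 = 0.4, 0.1
th21, th22 = 0.3, 0.1
th31, th32 = 0.2, 0.2

def friction(q, qdot):
    """Viscous plus position-dependent tanh friction per joint, as in (20)."""
    f1 = th11 * qdot[0] + th12 * q[0] * np.tanh(qdot[0])
    f2 = th21 * qdot[1] + th22 * q[1] * np.tanh(qdot[1])
    return np.array([f1, f2])

def tau_d(t):
    """Sinusoidal input disturbance as in (21)."""
    return np.array([th31 * np.sin(2.3 * np.pi * t),
                     th32 * np.sin(0.85 * np.pi * t)])

print(friction(np.zeros(2), np.zeros(2)))  # [0. 0.]
print(tau_d(0.0))                           # [0. 0.]
```

Both terms vanish at rest (tanh(0) = 0 and sin(0) = 0), so they only perturb the robot during motion or as time advances.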
The friction components (20) and disturbances (21) are used only to simulate the robot dynamics in model (1); they are not used in the controller design. The parameters $\theta_i$, $i = 1, \ldots, 14$, are chosen randomly.
A robot with two degrees of freedom has the following parameters:
$$ l_1 = 0.45\ \text{m};\quad l_2 = 0.65\ \text{m};\quad g = 9.81\ \text{m/s}^2 $$
The robot is assumed to have a duty cycle of $T = 10\ \text{s}$. The control-signal update time (which is also the output measurement time) is chosen as $t_s = 0.02\ \text{s}$. The reference signals for the two channels are periodic functions with period $T$:
$$
R_1(t) = \begin{cases} \min\left(\dfrac{t_s\,t}{2},\ 1.5\right) & \text{when } 0 \le t \le \dfrac{T}{2} \\[4pt] R_1(T-t) & \text{when } \dfrac{T}{2} < t \le T \end{cases};\quad
R_2(t) = \sin\left(\dfrac{2\pi t}{T}\right);\quad
A_1 = \begin{bmatrix} 31 & 0 \\ 0 & 12 \end{bmatrix},\quad
A_2 = \begin{bmatrix} 12 & 0 \\ 0 & 8 \end{bmatrix}
$$
We obtain the simulation results in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17.
Comment. 
Figure 10 and Figure 11 show the results for the two output channels after 20 and 300 trials, respectively, when the controller learns iteratively using Formulas (36) and (37). For the "smart" determination, the learning-function parameter is $K_k = \mathrm{diag}\left(K_k^{(j)}\right)$, $j = 1, 2, \ldots, n$, indexed by the input/output signal pairs $u_j \to q_j$. These results show that the controller provides the required tracking quality; after 300 trials, the output signal tracks the reference in both channels.
The quality of tracking the reference signal remains the same when Formula (40) is used in place of Formula (39) to compute the intelligent learning parameter in the iterative learning controller; the only difference is the corresponding change in the learning parameters. The evolution of the two learning parameters $K_k^{(1)}, K_k^{(2)}$ is shown in Figure 14, Figure 15, Figure 16 and Figure 17.

7. Conclusions

We propose a two-loop control structure for a robot, in which the inner loop is an uncertain-function-component estimator that compensates the robot's input, and the outer loop is an iterative learning controller (Figure 2). The Taylor series analysis method was used to construct the inner-loop controller. This method does not require a mathematical model of the robot; for estimation, it uses only measured signal values from previous sampling steps. We developed two options for implementing the iterative learning controller in the outer loop, which differ in how the learning-function parameters are determined online. The two proposed control loops were implemented in the control algorithm and simulated with the planar robot to evaluate their quality. The correctness of the above-mentioned solutions was demonstrated by theory and validated by simulation. Based on these concepts, we created control algorithms for objects that operate in batch processes, such as industrial robots. These control algorithms are all distinguished by the fact that they do not (or only very infrequently) employ a mathematical model of the control object. Their quality has also been verified through a 2-DOF robot simulation. However, disadvantages remain in constructing methods to determine convergent parameter sets for nonlinear learning functions with known structures, and the problem of constrained control by iterative learning still needs to be solved. These will be directions for future research.

Author Contributions

Conceptualization, V.T.H.; Methodology, V.T.H.; Software, V.T.H.; Validation, V.T.H.; Formal analysis, V.T.H., T.T.T. and V.Q.V.; Investigation, V.Q.V.; Resources, V.Q.V.; Data curation, V.Q.V.; Writing—original draft, V.T.H.; Writing—review & editing, V.T.H., T.T.T. and V.Q.V.; Visualization, V.T.H.; Supervision, V.T.H.; Project administration, V.T.H.; Funding acquisition, V.T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. A 2-DOF planar robot arm.
Figure 2. Structure of the first diagram for robot control using the iterative learning method.
Figure 3. Structure of the second diagram for robot control using the iterative learning method.
Figure 4. Results of the output tracking following 20 repetitions of the first joint.
Figure 5. Results of the output tracking following 20 repetitions of the second joint.
Figure 6. Results of the output tracking following 60 repetitions of the first joint.
Figure 7. Results of the output tracking following 60 repetitions of the second joint.
Figure 8. The output response of the first joint variable after 15 and 250 trials.
Figure 9. The output response of the second joint variable after 15 and 250 trials.
Figure 10. Position tracking response of the first joint when using (36).
Figure 11. Position tracking response of the second joint when using (36).
Figure 12. Position tracking response of the first joint when using (37).
Figure 13. Position tracking response of the second joint when using (37).
Figure 14. Change in the first learning-function parameter when using (36).
Figure 15. Change in the second learning-function parameter when using (36).
Figure 16. Change in the first learning-function parameter when using (37).
Figure 17. Change in the second learning-function parameter when using (37).