Article

Improving the Accuracy of a Robot by Using Neural Networks (Neural Compensators and Nonlinear Dynamics)

1
Institute of Computer Science and Technology, Peter the Great St. Petersburg Polytechnic University, Polytechnicheskaya 29, St. Petersburg 195251, Russia
2
Information Construction Management Office, MinZu University of China, No. 27 South Street, Zhongguancun, Haidian District, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Robotics 2022, 11(4), 83; https://doi.org/10.3390/robotics11040083
Submission received: 16 July 2022 / Revised: 15 August 2022 / Accepted: 17 August 2022 / Published: 19 August 2022
(This article belongs to the Section Industrial Robots and Automation)

Abstract

The subject of this paper is a programmable control system for a robotic manipulator. Considering the complex nonlinear dynamics involved in practical applications of such systems and robotic arms, the traditional control method is here replaced by the designed Elman and adaptive radial basis function neural networks, thereby improving the system stability and response rate. The related controllers and compensators were developed and trained using MATLAB software. The training results of the two neural network controllers for the programmed robot trajectories are presented, and the dynamic errors of the different types of neural network controllers under the two control methods are analyzed.

1. Introduction

In the early period of manipulator control design, the dynamic model of the system and the related system parameters had to be accurately described when designing the controller [1]. Traditional control design methods, such as computed torque control and inverse dynamics control [2], work well: by calculating the torque of the robot arm and constructing its dynamic equation, a good control effect can be obtained [3]. However, this holds only under the premise that an accurate data model is available, and it is difficult to obtain an accurate mathematical model of a robot during its actual production and use [4]. Furthermore, due to the effects of different payloads, it may be difficult to apply model-based methods at all. Recently, neural network compensators have been used to improve the characteristics of robotic manipulator control systems. In computer numerical control (CNC) systems, a neural network interpolator of robot link trajectories can be used to replace the traditional spline interpolator [5].
This research trains compensators using neural networks in the numerical control systems of robotic manipulators in the absence of precise initial data [6]. The adaptive neural network compensator replaces the traditional PID controller and other methods for compensating the dynamic error caused by the torsional load in the robot link drive [7]. Because of the nonlinear dynamic coupling of the drives, a two-link robot manipulator in the angular coordinate system was selected as the simulated control object [8].
The purpose of this work is to synthesize and train a multi-dimensional neural network controller to compensate and correct the dynamic error of the robot trajectory. Both the neural network controller and the simulation of the project were implemented in MATLAB (R2018b, The MathWorks, Inc.) [9].
There are mainly two types of neural networks trained in this paper: an Elman neural network and an RBF adaptive neural network.
An Elman neural network is a typical local regression network (globally feedforward, locally recurrent). An Elman network can be viewed as a recurrent neural network with local memory units and local feedback connections [10]. Its main structure is a feedforward connection, including an input layer, a hidden layer, and an output layer, whose connection weights can be modified by learning; the feedback connection is composed of a group of “structural” units that memorize the output value of the previous moment, and whose connection weights are fixed. In this kind of network, in addition to the ordinary hidden layer, there is a special hidden layer called the association layer (or context layer) [11]; this layer receives the feedback signal from the hidden layer, and each hidden layer node has a corresponding association layer node connected to it. The association layer takes the state of the hidden layer at the previous moment, together with the network input at the current moment, as the input of the hidden layer, which is equivalent to state feedback [12]. It is a dynamic feedback network that can internally feed back, store, and utilize the output information of the previous moment. It can realize not only the modeling of static systems but also the mapping of dynamic systems, directly reflecting the dynamic characteristics of the system. It is better than a BP neural network in terms of performance and stability [13].
The structure of the RBF network is similar to that of a multi-layer feedforward network: it is a three-layer feedforward network. The input layer is composed of signal source nodes; the second layer is the hidden layer, whose number of units depends on the needs of the described problem. The transformation function of a hidden unit is the radial basis function, a radially symmetric, non-negative, nonlinear function that decays with distance from its center point. The third layer is the output layer, which responds to the input pattern [14]. The transformation from the input space to the hidden layer space is nonlinear, while the transformation from the hidden layer space to the output layer space is linear. This gives the RBF neural network a very fast system response speed, which is its biggest feature [15].
In this paper, by constructing the dynamic equation of the robotic manipulator, we tested the two constructed neural networks in the same state and obtained the following results: 1. Both neural networks reached the theoretical position in the case of unknown perturbations and incomplete dynamic model data, demonstrating the high efficiency of the control. 2. The results of the external disturbance compensation also showed that although the response speed of the RBF adaptive neural network was faster, its accuracy was lower than that of the Elman neural network; the RBF adaptive neural network, however, performed better in local approximation. 3. For the designed RBF adaptive neural network, the control algorithm was designed directly in the workspace, which improved the system response time and ensured that the adaptive control remained stable.

2. Materials and Methods

In this paper, the controlled object was selected as the n-joint manipulator, and its dynamic equation was:
M(q)q̈ + C(q, q̇)q̇ + G(q) = τ − τ_d
where M(q) is the n × n positive definite inertia matrix; C(q, q̇) is the n × n centrifugal and Coriolis force term; G(q) is the n × 1 gravity term; q is the vector of joint variables; τ is the torque acting on the joints; and τ_d is the external disturbance. In practical engineering, M(q), C(q, q̇), and G(q) are often unknown and can be expressed as follows:
M(q) = M₀(q) + E_M, C(q, q̇) = C₀(q, q̇) + E_C, G(q) = G₀(q) + E_G
where E_M, E_C, and E_G are the modeling errors of M(q), C(q, q̇), and G(q), respectively. For the controller design, the tracking error is defined as:
e(t) = q_d(t) − q(t)
where q_d(t) is the ideal tracking command and q(t) is the actual position. The sliding mode function is defined as:
r = ė + Λe
where Λ > 0. Defining q̇_r = r(t) + q̇(t) and q̈_r = ṙ(t) + q̈(t), i.e., q̇_r = q̇_d + Λe and q̈_r = q̈_d + Λė, Equation (1) gives:
τ = M(q)q̈ + C(q, q̇)q̇ + G(q) + τ_d
= M(q)(q̈_r − ṙ) + C(q, q̇)(q̇_r − r) + G(q) + τ_d
= M(q)q̈_r + C(q, q̇)q̇_r + G(q) − M(q)ṙ − C(q, q̇)r + τ_d
= M₀(q)q̈_r + C₀(q, q̇)q̇_r + G₀(q) + E − M(q)ṙ − C(q, q̇)r + τ_d
where E = E_M q̈_r + E_C q̇_r + E_G. For the above system, the controller is designed as follows:
τ = τ_m + K_p r + K_i ∫ r dt + τ_r
where K_p > 0; K_i > 0; τ_m is the control term based on the nominal model; and τ_r is the robust term, with
τ_m = M₀(q)q̈_r + C₀(q, q̇)q̇_r + G₀(q), τ_r = K_r sgn(r)
where K_r = diag(k_rii) is the coefficient matrix of the linear compensator, with k_rii ≥ |Ē_i|, i = 1, …, n, and Ē = E + τ_d. The purpose of this project was to design a controller based on the Elman and RBF adaptive neural networks. Through simulation and training, the unknown external disturbance can be compensated more quickly, so as to obtain higher stability and a shorter system response time.
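The sliding-mode function and the controller above can be sketched in a few lines. This Python/NumPy fragment stands in for the paper's MATLAB implementation; the gains and error values below are illustrative, not the paper's tuned values:

```python
import numpy as np

def sliding_variable(e, e_dot, Lam):
    # r = e_dot + Lambda e  (sliding-mode function)
    return e_dot + Lam @ e

def control_torque(tau_m, r, r_int, Kp, Ki, Kr):
    # tau = tau_m + Kp r + Ki * integral(r) dt + Kr sgn(r)
    return tau_m + Kp @ r + Ki @ r_int + Kr @ np.sign(r)

# Illustrative gains for a two-joint arm (not the paper's tuned values).
Lam = np.diag([5.0, 5.0])
Kp, Ki, Kr = np.diag([20.0, 20.0]), np.diag([1.0, 1.0]), np.diag([0.5, 0.5])

e = np.array([0.1, -0.05])      # tracking error e = q_d - q
e_dot = np.array([0.0, 0.02])   # error rate
r = sliding_variable(e, e_dot, Lam)
tau = control_torque(np.zeros(2), r, r_int=np.zeros(2), Kp=Kp, Ki=Ki, Kr=Kr)
```

With the nominal-model term and the accumulated integral both at zero, only the proportional and robust terms contribute to the torque.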
By synthesizing Equations (5)–(7), we can obtain:
M₀(q)q̈_r + C₀(q, q̇)q̇_r + G₀(q) − M(q)ṙ − C(q, q̇)r + E + τ_d = M₀(q)q̈_r + C₀(q, q̇)q̇_r + G₀(q) + K_p r + K_i ∫₀ᵗ r dt + K_r sgn(r)
which simplifies to:
M(q)ṙ + C(q, q̇)r + K_i ∫₀ᵗ r dt = −K_p r − K_r sgn(r) + Ē
Under this model, when the model terms and uncertainty are unknown, the traditional controller causes the control input to chatter. To solve this problem, two kinds of neural networks were used to approximate the model and compensate for external disturbances.
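The dynamic equation at the start of this section can be illustrated numerically. This Python/NumPy sketch (the paper's own simulations use MATLAB; all matrix values here are hypothetical, for illustration only) evaluates τ = M(q)q̈ + C(q, q̇)q̇ + G(q) + τ_d for a two-joint arm:

```python
import numpy as np

def joint_torque(M, C, G, q_dd, q_dot, tau_d):
    # tau = M(q) q'' + C(q, q') q' + G(q) + tau_d  (dynamic equation, rearranged for tau)
    return M @ q_dd + C @ q_dot + G + tau_d

# Hypothetical two-joint values, for illustration only.
M = np.array([[2.0, 0.3], [0.3, 1.0]])    # positive definite inertia matrix
C = np.array([[0.0, -0.1], [0.1, 0.0]])   # centrifugal/Coriolis term
G = np.array([9.8, 4.9])                  # gravity term

tau = joint_torque(M, C, G,
                   q_dd=np.zeros(2),             # joint acceleration
                   q_dot=np.array([1.0, 0.5]),   # joint velocity
                   tau_d=np.zeros(2))            # no external disturbance
```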

3. Training of Nonlinear Neural Network Compensators

3.1. Designing Compensators with Elman Neural Networks

This part of the article develops a model of an Elman neural control scheme for the nonlinear dynamic system of a robotic manipulator.
As an example, consider an industrial n-link robot manipulator whose links are interconnected by rotational drives. The position of the links is determined by the angles ϕ1, ϕ2, …, ϕn. In addition, weight forces act on the robot links, directed at a certain angle α to the selected coordinate system, which demonstrates the ability of the device to work at any angle to the horizon [16].
The object of the control is the n-joint robotic arm, whose dynamic equation is:
M(q)q̈ + C(q, q̇)q̇ + G(q) + F(q̇) + τ_d = τ
Through the dynamic analysis of the manipulator in the second part and Equations (1)–(4), the following equations can be obtained:
Mṙ = M(q̈_d − q̈ + Λė) = M(q̈_d + Λė) − Mq̈
= M(q̈_d + Λė) + Cq̇ + G + F + τ_d − τ
= M(q̈_d + Λė) − Cr + C(q̇_d + Λe) + G + F + τ_d − τ
= −Cr − τ + f + τ_d
where f(x) = Mq̈_r + Cq̇_r + G + F and q̇_r = q̇_d + Λe. As can be seen from the expression, f(x) contains all the model information; that is, all the model information in Equation (7) can be represented by f(x). The control goal was to use a neural network approximation of f(x) to design a robust controller that does not require model information. An Elman network was used to approximate f(x), and the network algorithm is:
h_j = x_j(k − 1), j = 1, 2, …, m; f(x) = W₁ᵀh(k) + W₂ᵀu(k − 1) + ε
where x is the input of the Elman neural network and W₁, W₂ are the ideal network weights: W₁ connects the input layer to the middle layer, and W₂ connects the output layer back to the middle layer; h = [h₁ h₂ … h_m]ᵀ; ε is a very small positive real number; and m is the number of neurons in the hidden layer. The approximation of f(x) was carried out using this Elman network.
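A generic Elman cell of the kind described here can be sketched as follows; the layer sizes, tanh activation, and random initial weights are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanNet:
    """Minimal Elman cell: the context layer stores the previous hidden state."""
    def __init__(self, n_in, n_hidden, n_out):
        self.Wx = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
        self.Wc = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
        self.Wy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output
        self.h = np.zeros(n_hidden)  # context units start at zero

    def step(self, x):
        # the new hidden state mixes the current input with the previous
        # hidden state fed back through the context weights (state feedback)
        self.h = np.tanh(self.Wx @ x + self.Wc @ self.h)
        return self.Wy @ self.h

net = ElmanNet(n_in=2, n_hidden=8, n_out=2)
y1 = net.step(np.array([0.1, -0.2]))
y2 = net.step(np.array([0.1, -0.2]))  # same input, different output: the memory effect
```

Feeding the same input twice produces different outputs because the context layer carries the previous hidden state, which is exactly the dynamic-memory property exploited for compensation.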
The advantage of the Elman neural network is its increased stability: it has feedback from the outputs of internal neurons to the intermediate layer, which makes it more stable than recurrent networks of a similar type (for example, the Hopfield neural network, in which internal feedback is brought to the primary inputs, where the signals are mixed). In addition, the Elman neural network makes it possible to take into account the history of the observed processes and to accumulate information for choosing the right robot control strategy [11]. The network estimate is:
f̂(x) = Ŵ₁ᵀh(k) + Ŵ₂ᵀu(k − 1)
where W̃ = W − Ŵ and ‖W‖_F ≤ W_max. Combining the above equation with Formula (12), we get:
f − f̂ = W₁ᵀh(k) + W₂ᵀu(k − 1) + ε − Ŵ₁ᵀh(k) − Ŵ₂ᵀu(k − 1) = W̃₁ᵀh(k) + W̃₂ᵀu(k − 1) + ε
From Equation (13), referencing the design methods of [11], the control law is designed as:
τ = f̂(x) + K_v r − v
where v = −(ε_N + b_d) sgn(r) is the robust term and f̂(x) is the approximation of f(x). The Elman network weight adaptive law is:
Ŵ̇ = Γhrᵀ
where Γ = Γᵀ > 0. Substituting Equation (13) into (10) gives the closed-loop dynamics used in the Elman neural network adaptive control MATLAB simulation:
Mṙ = −Cr − (f̂(x) + K_v r − v) + f + τ_d = −(K_v + C)r + W̃₁ᵀh(k) + W̃₂ᵀu(k − 1) + ε + τ_d + v = −(K_v + C)r + ζ₁
where ζ₁ = W̃₁ᵀh(k) + W̃₂ᵀu(k − 1) + ε + τ_d + v. With reference to [7], the closed-loop system is analyzed as follows. The Lyapunov function is defined as:
L = ½rᵀMr + ½tr(W̃₁ᵀΓ⁻¹W̃₁) + ½tr(W̃₂ᵀΓ⁻¹W̃₂)
Differentiating Equation (17) gives:
L̇ = rᵀMṙ + ½rᵀṀr + tr(W̃₁ᵀΓ⁻¹W̃̇₁) + tr(W̃₂ᵀΓ⁻¹W̃̇₂)
By substituting Equation (15) into the above equation, the following can be obtained:
L̇ = −rᵀK_v r + ½rᵀ(Ṁ − 2C)r + tr W̃₁ᵀ(Γ⁻¹W̃̇₁ + h(k)rᵀ) + tr W̃₂ᵀ(Γ⁻¹W̃̇₂ + u(k − 1)rᵀ) + rᵀ(ε + τ_d + v)
According to the following conditions:
(1)
the skew-symmetry property of the manipulator, rᵀ(Ṁ − 2C)r = 0;
(2)
rᵀW̃ᵀh = tr(W̃ᵀhrᵀ);
(3)
W̃̇ = −Ŵ̇ = −Γhrᵀ,
we obtain:
L̇ = −rᵀK_v r + rᵀ(ε + τ_d + v)
Considering
rᵀ(ε + τ_d + v) = rᵀ(ε + τ_d) − rᵀ(ε_N + b_d) sgn(r) = rᵀ(ε + τ_d) − ‖r‖₁(ε_N + b_d) ≤ 0,
the following bound is obtained:
L̇ ≤ −rᵀK_v r ≤ 0
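The sign argument for the robust term can also be checked numerically: whenever |ε_i| ≤ ε_N and |τ_d,i| ≤ b_d, the term v = −(ε_N + b_d)sgn(r) makes rᵀ(ε + τ_d + v) non-positive. A quick randomized check (the bounds and sample count are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
eps_N, b_d = 0.5, 1.0   # assumed bounds on the approximation error and disturbance

def robust_product(r, eps, tau_d):
    # r^T (eps + tau_d + v) with v = -(eps_N + b_d) sgn(r); should never be positive
    v = -(eps_N + b_d) * np.sign(r)
    return r @ (eps + tau_d + v)

vals = []
for _ in range(1000):
    r = rng.uniform(-2.0, 2.0, size=2)
    eps = rng.uniform(-eps_N, eps_N, size=2)   # |eps_i| <= eps_N
    tau_d = rng.uniform(-b_d, b_d, size=2)     # |tau_d_i| <= b_d
    vals.append(robust_product(r, eps, tau_d))
```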
The neural network learning algorithm consists of the following steps:
1. At the initial moment of time t = 0, all neurons of the hidden layer are set to the zero position; the initial value is zero.
2. The input value is fed to the network, where it is propagated forward.
3. Set t = t + 1 and go back to step 2; neural network training is performed until the total root-mean-square error of the network reaches its smallest value.
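The steps above, combined with the adaptive law Ŵ̇ = Γhrᵀ, can be discretized into a simple update loop. The dimensions, gain, time step, and stand-in signals below are hypothetical, chosen only to show the shape of the computation:

```python
import numpy as np

def adapt_weights(W_hat, h, r, Gamma, dt):
    # Euler-discretized adaptive law: W_hat_dot = Gamma h r^T
    return W_hat + dt * (Gamma @ np.outer(h, r))

m, n = 5, 2                # hidden units, joints (hypothetical sizes)
Gamma = 10.0 * np.eye(m)   # positive definite adaptation gain (illustrative)
W_hat = np.zeros((m, n))   # step 1: weights start at zero

for k in range(100):       # steps 2-3: feed input, advance t, repeat
    h = np.tanh(np.linspace(-1.0, 1.0, m) * 0.01 * (k + 1))  # stand-in hidden output
    r = np.array([0.05, -0.02]) / (k + 1)                    # stand-in sliding variable
    W_hat = adapt_weights(W_hat, h, r, Gamma, dt=0.01)

f_hat = W_hat.T @ h        # network estimate of f(x)
```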

3.2. Designing a Compensator with an Adaptive Radial Basis Function Neural Network for Local Model Approximation

In this section, a nonlinear dynamic model compensator is designed based on an adaptive radial basis function (RBF) neural network, following the literature [8,11]; it is then compared with the neural network compensator designed in the previous section.
The dynamic equation of the manipulator has the following properties:
Property 1: the inertia matrix M_x(q) is symmetric positive definite;
Property 2: if C_x(q, q̇) is defined by the Christoffel notation rule, the matrix Ṁ_x(q) − 2C_x(q, q̇) is skew-symmetric [17].
Since M_x(q) and G_x(q) are functions of q only, they can be modeled using static neural networks [16,18,19,20].
m_xkj(q) = Σ_l θ_kjl ξ_kjl(q) + ε_mkj(q) = θ_kjᵀ ξ_kj(q) + ε_mkj(q)
g_xk(q) = Σ_l β_kl η_kl(q) + ε_gk(q) = β_kᵀ η_k(q) + ε_gk(q)
where θ_kjl, β_kl ∈ R are the weights of the neural network; ξ_kjl(q), η_kl(q) ∈ R are radial basis functions whose input is the vector q; and ε_mkj(q), ε_gk(q) ∈ R are the modeling errors of m_xkj(q) and g_xk(q), respectively, and are assumed to be bounded.
C_x(q, q̇) is modeled with a dynamic neural network with inputs q and q̇; the neural network model of the element c_xkj(q, q̇) is:
c_xkj(q, q̇) = Σ_l α_kjl ξ_kjl(z) + ε_ckj(z) = α_kjᵀ ξ_kj(z) + ε_ckj(z)
where z = [qᵀ q̇ᵀ]ᵀ ∈ R²ⁿ; α_kjl ∈ R are the weights; ξ_kjl(z) ∈ R are radial basis functions of the input vector z; and ε_ckj(z) is the modeling error of the element c_xkj(q, q̇), which is also assumed to be bounded [20,21].
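The radial basis functions used in these element models are typically Gaussian. A minimal sketch follows; the seven-center grid and width are illustrative choices (Section 4 later uses a similar grid), and the weight vector is a hypothetical stand-in for trained values:

```python
import numpy as np

def gaussian_rbf(x, centers, b):
    # xi_l(x) = exp(-||x - c_l||^2 / b^2), one activation per center
    d2 = np.sum((x[None, :] - centers) ** 2, axis=1)
    return np.exp(-d2 / b ** 2)

def rbf_element(x, centers, b, theta):
    # one scalar element model, theta^T xi(x), as in the equations above
    return theta @ gaussian_rbf(x, centers, b)

# Illustrative seven-center grid on one input dimension.
centers = np.array([[-1.5], [-1.0], [-0.5], [0.0], [0.5], [1.0], [1.5]])
xi = gaussian_rbf(np.array([0.0]), centers, b=10.0)
theta = np.ones(7)   # hypothetical trained weights
m_hat = rbf_element(np.array([0.0]), centers, 10.0, theta)
```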
Using neural network modeling, the dynamic equation of the manipulator in space can be written as:
M_x(q)ẍ + C_x(q, q̇)ẋ + G_x(q) = F_x
Using GL (general linear) matrices and their multiplication operations, M x q can be written as:
M_x(q) = {θ}ᵀ · {Ξ(q)} + E_M(q)
where {θ} and {Ξ(q)} are GL (general linear) matrices whose elements are θ_kj and ξ_kj(q), and E_M(q) ∈ Rⁿˣⁿ is the matrix whose elements are the modeling errors ε_mkj(q).
Similarly, for C_x(q, q̇) and G_x(q):
C_x(q, q̇) = {A}ᵀ · {Z(z)} + E_C(z), G_x(q) = {B}ᵀ · {H(q)} + E_G(q)
where {A}, {Z(z)}, {B}, and {H(q)} are GL matrices and vectors whose elements are α_kj, ξ_kj(z), β_k, and η_k(q), and E_C(z) ∈ Rⁿˣⁿ and E_G(q) ∈ Rⁿ are the matrix and vector of the modeling errors ε_ckj(z) and ε_gk(q), respectively.
Assuming x_d(t) is the ideal trajectory in the workspace, ẋ_d(t) and ẍ_d(t) are the ideal velocity and acceleration. Define:
ẋ_r(t) = ẋ_d(t) + Λe(t), r(t) = ẋ_r(t) − ẋ(t) = ė(t) + Λe(t)
where Λ   is a positive definite matrix.
Lemma 1 (Barbalat’s Lemma).
If the function h: R → R is uniformly continuous on [0, +∞), and lim_{t→∞} ∫₀ᵗ h(δ)dδ exists and is finite, then lim_{t→∞} h(t) = 0 [22].
Lemma 2.
Let e(t) = h(t) * r(t), where * denotes convolution, h(t) = L⁻¹{H(s)}, and H(s) is an n × n strictly exponentially stable transfer function. If r ∈ L₂ⁿ, then e ∈ L₂ⁿ ∩ L∞ⁿ, ė ∈ L₂ⁿ, e is continuous, and e → 0, r → 0, ė → 0 as t → ∞.
Consider a second-order SISO system and let r(t) = ce(t) + ė(t); then r(s) = e(s)(c + s), e(s) = r(s)/(s + c), and H(s) = 1/(s + c). To ensure that H(s) is a strictly exponentially stable transfer function, we require c > 0. If the above conditions are satisfied, then as r(t) → 0, ce(t) + ė(t) → 0, ensuring that the system error converges exponentially [17].
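The first-order relation r(t) = ce(t) + ė(t) can be checked numerically: with r ≡ 0 and c > 0, the error obeys ė = −ce and decays exponentially. A small sketch with arbitrary values of c and e(0):

```python
import math

c, e0, t_end = 2.0, 1.0, 1.5          # arbitrary pole, initial error, horizon
e_exact = e0 * math.exp(-c * t_end)   # closed form of e_dot = -c e (i.e. r = 0)

# Euler simulation of the same first-order error dynamics
e, dt = e0, 1e-4
for _ in range(int(t_end / dt)):
    e += dt * (-c * e)
```

The simulated error matches the closed-form exponential decay, illustrating why c > 0 guarantees convergence of e once r has been driven to zero.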
Using (·̂) to represent the estimated value of (·), and defining (·̃) = (·) − (·̂), θ̂, Â, and B̂ represent the estimates of θ, A, and B.
The controller can then be designed as:
F_x = {θ̂}ᵀ · {Ξ(q)}ẍ_r + {Â}ᵀ · {Z(z)}ẋ_r + {B̂}ᵀ · {H(q)} + Kr + k_s sgn(r)
where K ∈ Rⁿˣⁿ, K > 0; k_s > ‖E‖; and E = E_M(q)ẍ_r + E_C(z)ẋ_r + E_G(q). The first three terms of the controller are model-based control, the Kr term is equivalent to proportional-derivative (PD) control, and the last term of the control law is a robust term that suppresses the modeling error of the neural network.
From the expression of the controller, it is obvious that the controller does not need to solve the inverse Jacobian matrix. In actual control, the joint torque can be obtained by τ = Jᵀ(q)F_x [19].
Substituting Equations (24) and (25) into Equation (23), we can obtain:
({θ}ᵀ · {Ξ(q)} + E_M(q))ẍ + ({A}ᵀ · {Z(z)} + E_C(z))ẋ + {B}ᵀ · {H(q)} + E_G(q) = F_x
Substituting the control law (27) into the above formula, we can obtain:
({θ}ᵀ · {Ξ(q)} + E_M(q))ẍ + ({A}ᵀ · {Z(z)} + E_C(z))ẋ + {B}ᵀ · {H(q)} + E_G(q) = {θ̂}ᵀ · {Ξ(q)}ẍ_r + {Â}ᵀ · {Z(z)}ẋ_r + {B̂}ᵀ · {H(q)} + Kr + k_s sgn(r)
Substituting ẋ = ẋ_r − r and ẍ = ẍ_r − ṙ into the above equation, we can obtain:
({θ}ᵀ · {Ξ(q)} + E_M(q))(ẍ_r − ṙ) + ({A}ᵀ · {Z(z)} + E_C(z))(ẋ_r − r) + {B}ᵀ · {H(q)} + E_G(q) = {θ̂}ᵀ · {Ξ(q)}ẍ_r + {Â}ᵀ · {Z(z)}ẋ_r + {B̂}ᵀ · {H(q)} + Kr + k_s sgn(r)
Rearranging with Equations (24) and (25), we can obtain:
M_x(q)ṙ + C_x(q, q̇)r + Kr + k_s sgn(r) = {θ̃}ᵀ · {Ξ(q)}ẍ_r + {Ã}ᵀ · {Z(z)}ẋ_r + {B̃}ᵀ · {H(q)} + E
For the closed-loop system, if K > 0, k_s > ‖E‖, and the adaptive law is designed as:
θ̂̇_k = Γ_k{ξ_k(q)}ẍ_r r_k, α̂̇_k = Q_k{ξ_k(z)}ẋ_r r_k, β̂̇_k = N_k η_k(q) r_k
where Γ_k = Γ_kᵀ > 0, Q_k = Q_kᵀ > 0, N_k = N_kᵀ > 0, and θ̂_k and α̂_k are the vectors composed of θ̂_kj and α̂_kj, respectively, then θ̂_k, α̂_k, β̂_k ∈ L∞, e ∈ L₂ⁿ ∩ L∞ⁿ, e is continuous, and e → 0 and ė → 0 when t → ∞.
According to the integral-type Lyapunov function proposed in the literature, the stability can be analyzed as follows. The Lyapunov function is defined as:
V = ½rᵀM_x(q)r + ½Σ_{k=1}ⁿ θ̃_kᵀΓ_k⁻¹θ̃_k + ½Σ_{k=1}ⁿ α̃_kᵀQ_k⁻¹α̃_k + ½Σ_{k=1}ⁿ β̃_kᵀN_k⁻¹β̃_k
where Γ_k, Q_k, and N_k are symmetric positive definite matrices. Differentiating V, we obtain:
V̇ = rᵀM_x ṙ + ½rᵀṀ_x r + Σ_{k=1}ⁿ θ̃_kᵀΓ_k⁻¹θ̃̇_k + Σ_{k=1}ⁿ α̃_kᵀQ_k⁻¹α̃̇_k + Σ_{k=1}ⁿ β̃_kᵀN_k⁻¹β̃̇_k
Since the matrix Ṁ_x(q) − 2C_x(q, q̇) is skew-symmetric, rᵀ(Ṁ_x − 2C_x)r = 0, and the formula below is obtained:
V̇ = rᵀ(M_x ṙ + C_x r) − Σ_{k=1}ⁿ θ̃_kᵀΓ_k⁻¹θ̂̇_k − Σ_{k=1}ⁿ α̃_kᵀQ_k⁻¹α̂̇_k − Σ_{k=1}ⁿ β̃_kᵀN_k⁻¹β̂̇_k
Substituting Equation (33) into the above equation, we can obtain:
V̇ = −rᵀKr − k_s rᵀ sgn(r) + Σ_{k=1}ⁿ θ̃_kᵀ{ξ_k(q)}ẍ_r r_k + Σ_{k=1}ⁿ α̃_kᵀ{ξ_k(z)}ẋ_r r_k + Σ_{k=1}ⁿ β̃_kᵀη_k(q)r_k + rᵀE − Σ_{k=1}ⁿ θ̃_kᵀΓ_k⁻¹θ̂̇_k − Σ_{k=1}ⁿ α̃_kᵀQ_k⁻¹α̂̇_k − Σ_{k=1}ⁿ β̃_kᵀN_k⁻¹β̂̇_k
Substituting the adaptive law (32) into the equation above and combining the inequality k_s > ‖E‖, we can obtain:
V̇ = −rᵀKr − k_s rᵀ sgn(r) + rᵀE ≤ 0
Convergence analysis:
(1)
From V̇ ≤ −rᵀKr ≤ 0 and K > 0, it follows from Lemma 2 that e ∈ L₂ⁿ ∩ L∞ⁿ, ė ∈ L₂ⁿ, and e is continuous; then, when t → ∞, e → 0 and ė → 0.
(2)
From V̇ ≤ 0 we get 0 ≤ V(t) ≤ V(0) for all t ≥ 0. Therefore V(t) ∈ L∞, and hence θ̃_k, α̃_k, β̃_k ∈ L∞ and θ̂_k, α̂_k, β̂_k ∈ L∞.

4. Simulation Study

This section presents the simulation results used to evaluate the operation of the proposed adaptive neural controller. The proposed control scheme is applied to a two-link manipulator whose mathematical model was developed in the SolidWorks package and imported into MATLAB via Simulink [22,23].
Initially, the first link moves while the second one does not. It is shown that after the training of the first link, external disturbances are compensated. Then the second link starts moving and the first one stops. After that, both links move, exerting dynamic disturbances on each other. The dynamic equation can be written as:
M(q)q̈ + C(q, q̇)q̇ + G(q) = τ − τ_d
M(q) = [ν₁ + 2ν₃ cos q₂  ν₂ + ν₃ cos q₂; ν₂ + ν₃ cos q₂  ν₂]
C(q, q̇) = [−ν₃ q̇₂ sin q₂  −ν₃(q̇₁ + q̇₂) sin q₂; ν₃ q̇₁ sin q₂  0]
G(q) = [g cos q₁ + g cos(q₁ + q₂); g cos(q₁ + q₂)]
where the known parameters are ν₁ = 12, ν₂ = 9, ν₃ = 8, g = 9.8 [23], and the external disturbance of the system is τ_d = d₁ + d₂‖e‖ + d₃‖ė‖, with d₁ = 2, d₂ = 3, d₃ = 6. Assuming that the expected commands for tracking the link angles and angular velocities are given by the following equations:
q₁d = 1 + 0.2 sin(0.5πt)
q₂d = 1 − 0.2 cos(0.5πt)
then the initial state of the system is [q₁ q₂]ᵀ = [0.6 0.3]ᵀ, with ΔM = 0.6M, ΔC = 0.6C, ΔG = 0.6G. In the modeling, the control-law and adaptive-law formulas use α = 2, γ = 20, k = 0.001. In the neural network, the parameters of the Gaussian function were set to c = [−2 −1 0 1 2] and b = 3, and the initial weights were 0.1. The model of the manipulator is shown in Figure 1.
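The desired trajectory commands and the resulting initial tracking error can be reproduced directly from the values quoted above (a small Python/NumPy sketch; the paper's simulation itself runs in MATLAB/Simulink):

```python
import numpy as np

def q_desired(t):
    # q1d = 1 + 0.2 sin(0.5*pi*t), q2d = 1 - 0.2 cos(0.5*pi*t)
    return np.array([1.0 + 0.2 * np.sin(0.5 * np.pi * t),
                     1.0 - 0.2 * np.cos(0.5 * np.pi * t)])

q0 = np.array([0.6, 0.3])    # initial state [q1 q2]^T
e0 = q_desired(0.0) - q0     # initial tracking error e = q_d - q
```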
The simulation parameters of the robotic arm were as follows:
L(1): ‘qlim’, [−180 180] * deg; ‘m’, 0,… ‘Jm’, 200 × 10−6,… ‘G’, −62.6111,… ‘B’, 1.48 × 10−3
L(2): ‘qlim’, [−180 180] * deg; ‘m’, 17.4,… ‘Jm’, 200 × 10−6,… ‘G’, 107.815,… ‘B’, 0.817 × 10−3
(1)
Simulation Modeling of Elman Neural Networks
In the MATLAB/Simulink system, an artificial neural network model was created to control the robotic manipulator, containing an input layer of 15 neurons and a hidden layer of 12 to 19 neurons with local feedback through delay lines.
It can be seen from Figures 2 and 3 that, in the case of adaptive compensation, there was a certain degree of disturbance at the initial stage, and the angular velocity and position tended to converge.
It can be seen from Figure 4 that the disturbances were compensated and that the robot control process was quite satisfactory.
The experimental results also demonstrate that the proposed method was sufficiently resistant to dynamic perturbations due to unknown situations. The proposed control method is original and successfully exploits the advantages of SMC, neural networks, and adaptive control.
(2)
Simulation Modeling of RBF Neural Networks
To approximate each element of M_x(q) and G_x(q), the input vector of the RBF neural network is taken as q and the number of hidden layer nodes is nine. For C_x(q, q̇), the input vector of the RBF neural network is (q, q̇) and the number of hidden layer nodes is seven.
The parameters of all Gaussian functions are taken as cᵢ = [−1.5 −1.0 −0.5 0 0.5 1.0 1.5] and bᵢ = 10. The initial threshold of the neural network is set to zero in the simulation. The control law adopts Formula (26), and the adaptive law adopts Formula (31). The gain is chosen as K = diag(30, 30), k_s = 0.5. From Lemma 2, Λ = diag(15, 15) is desirable. The parameters of the adaptive law (31) are taken as Γ_k = diag(2.0), Q_k = diag(0.10), and N_k = diag(5.0). The simulation results are shown in Figure 5 and Figure 6.
As can be seen from Figures 5 and 6, at the beginning of the simulation the error value was relatively large because the control input was in the neural network learning phase. When the neural network compensator passed the learning stage, the errors were basically eliminated; the motion trajectory and the estimated values converged relatively quickly in the RBF neural network, but its training and compensation accuracy were lower than those of the Elman neural network. The unknown perturbation was almost completely cancelled after 1 s.
From Figure 7 it can be seen that, since the tracking trajectory was not a continuous (persistent) excitation, the estimated values M̂_x(q), Ĉ_x(q, q̇), and Ĝ_x(q) did not converge to M_x(q), C_x(q, q̇), and G_x(q), a situation that is often encountered in practical engineering.
From the simulation of the two neural networks, it can be seen from Figures 8 and 9 that although the RBF neural network had a faster training and learning speed under the same trajectory planning, the Elman neural network was better in terms of compensation accuracy for external disturbances and errors. However, with more interference and in uncertain environments, the overall training results of the RBF were better.
In theory, as more neurons are added to the hidden layer of the Elman network, higher learning accuracy can be expected. However, in the actual simulation, once the number of neurons in the hidden layer exceeded 60, the neural network entered a state of over-learning and the accuracy dropped.

5. Conclusions

Summarizing the research results, the following conclusions can be drawn. The quality with which a robot performs a specific task depends not only on the quality of the manufacturing materials and moving parts, but also on the quality of the mathematical model of the robot. The control efficiency and accuracy of the robot are based on the analysis of its dynamic model, reducing the error between the planned trajectory and the actual trajectory. This paper presented two different neural networks for the nonlinear dynamic systems of manipulation robots, developing models of adaptive neural control schemes based on Elman and RBF networks. The choice of network architecture is sound. Simulations of the RBF adaptive neural network show that, despite its faster response time, in the absence of continuous excitation the system estimates do not converge and the error value is much larger than that of the Elman network. However, if the approximation is performed locally, or if the system dynamics are complex and there are many external disturbances, the response time and accuracy of the RBF adaptive neural network are better. The rotation model of the planar manipulator is described by the dynamic model equation and the output equation. The simulation and training of the adaptive neural network controller confirm its usefulness in the presence of imprecise dynamic structural data and unknown external disturbances. A computer simulation of the optimal control model, tracking the rotation angle of the manipulator, confirmed the theoretical predictions and demonstrated high operating efficiency. This method can also be applied to the simulation of multi-degree-of-freedom manipulators, and the improved model can be used for adaptive control of the manipulator.

Author Contributions

Conceptualization, methodology, visualization, software, validation, formal analysis, Z.Y., Y.K.; data curation, Z.Y.; writing—original draft preparation, Z.Y., Y.K. and L.X.; writing—review and editing, Z.Y., Y.K. and L.X. All authors have read and agreed to the published version of the manuscript.

Funding

The strategic academic leadership program ‘Priority 2030’ (Agreement 075-15-2021-1333 dated 30 September 2021).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Numerical data and simulations are available for academic purposes upon request by contacting the author.

Acknowledgments

The authors would like to thank the Peter the Great St. Petersburg Polytechnic University (SPBSUTU) for providing all facilities during this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duka, A.V. Neural Network based Inverse Kinematics Solution for Trajectory Tracking of a Robotic Arm. Procedia Technol. 2014, 12, 20–27. [Google Scholar] [CrossRef]
  2. Arseniev, D.G.; Overmeyer, L.; Kälviäinen, H.; Katalinić, B. Cyber-Physical Systems and Control; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  3. Islam, S.; Liu, X.P. Robust Sliding Mode Control for Robot Manipulators. IEEE Trans. Ind. Electron. 2011, 58, 2444–2453. [Google Scholar] [CrossRef]
  4. Yazdanpanah, M.J.; KarimianKhosrowshahi, G. Robust Control of Mobile Robots Using the Computed Torque Plus H∞ Compensation Method. Available online: https://www.sciencegate.app/document/10.1109/cdc.2003.1273069 (accessed on 29 June 2022).
  5. Rostova, E.N.; Rostov, N.V.; Yan, Y.Z. Neural network compensation of dynamic errors in a position control system of a robot manipulator. Comput. Telecommun. Control. 2020, 64, 53–64. [Google Scholar] [CrossRef]
  6. Yesildirak, A.; Lewis, F.W.; Yesildirak, S.J. Neural Network Control of Robot Manipulators and Non-Linear Systems; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
  7. Kara, K.; Missoum, T.E.; Hemsas, K.E.; Hadjili, M.L. Control of a robotic manipulator using neural network based predictive control. In Proceedings of the 17th IEEE International Conference on Electronics, Circuits and Systems, Athens, Greece, 12–15 December 2010; pp. 1104–1107. [Google Scholar] [CrossRef]
  8. Seshagiri, S.; Khalil, H. Output Feedback Control of Nonlinear Systems Using RBF Neural Networks. IEEE Trans. Neural Netw. 2000, 11, 69–79. [Google Scholar] [CrossRef] [PubMed]
  9. Tetko, V.I.; Kůrková, V.; Karpov, P.; Theis, F. Artificial Neural Networks and Machine Learning–ICANN 2019: Deep Learning: 28th International Conference on Artificial Neural Networks, Munich, Germany, September 17–19, 2019, Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  10. Dou, M.; Qin, C.; Li, G.; Wang, C. Research on Calculation Method of Free flow Discharge Based on Artificial Neural Network and Regression Analysis. Flow Meas. Instrum. 2020, 72, 101707. [Google Scholar] [CrossRef]
  11. Ren, G.; Cao, Y.; Wen, S.; Huang, T.; Zeng, Z. A Modified Elman Neural Network with a New Learning Rate Scheme. Neurocomputing 2018, 286, 11–18. [Google Scholar] [CrossRef]
  12. Design and Implementation of a RoBO-2L MATLAB Toolbox for a Motion Control of a Robotic Manipulator. Available online: https://ieeexplore.ieee.org/document/7473678/ (accessed on 30 June 2022).
  13. Cheng, Y.-C.; Qi, W.-M.; Cai, W.-Y. Dynamic properties of Elman and modified Elman neural network. In Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2002; pp. 637–640. [Google Scholar] [CrossRef]
  14. Beheim, L.; Zitouni, A.; Belloir, F. New RBF neural network classifier with optimized hidden neurons number. WSEAS Trans. Syst. 2004, 2, 467–472. [Google Scholar]
  15. Song, Q.; Meng, G.J.; Yang, L.; Du, D.Q.; Mao, X.F. Comparison between BP and RBF Neural Network Pattern Recognition Process Applied in the Droplet Analyzer. Appl. Mech. Mater. 2014, 543–547, 2333–2336. [Google Scholar] [CrossRef]
  16. Luo, B.; Liu, D.; Yang, X.; Ma, H. H∞ Control Synthesis for Linear Parabolic PDE Systems with Model-Free Policy Iteration. In Advances in Neural Networks—ISNN 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 81–90. [Google Scholar] [CrossRef]
  17. Ge, S.S.; Hang, C.C.; Woon, L.C. Adaptive neural network control of robot manipulators in task space. IEEE Trans. Ind. Electron. 1997, 44, 746–752. [Google Scholar] [CrossRef]
  18. Chen, C.; Liu, Z.; Xie, K.; Zhang, Y.; Chen, C.L.P. Adaptive neural control of MIMO stochastic systems with unknown high-frequency gains. Inf. Sci. 2017, 418, 513–530. [Google Scholar] [CrossRef]
  19. Chen, Y.; Liu, J.; Wang, H.; Pan, Z.; Han, S. Model-free based adaptive RBF neural network control for a rehabilitation exoskeleton. In Proceedings of the 2019 Chinese Control and Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 4208–4213. [Google Scholar] [CrossRef]
  20. Wang, M.; Yang, A. Dynamic Learning from Adaptive Neural Control of Robot Manipulators with Prescribed Performance. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2244–2255. [Google Scholar] [CrossRef]
  21. Tran, M.-D.; Kang, H.-J. Nonsingular Terminal Sliding Mode Control of Uncertain Second-Order Nonlinear Systems. Math. Probl. Eng. 2015, 2015, e181737. [Google Scholar] [CrossRef]
  22. Ortega, R.; Spong, M.W. Adaptive motion control of rigid robots: A tutorial. In Proceedings of the 27th IEEE Conference on Decision and Control, Austin, TX, USA, 7–9 December 1988; pp. 1575–1584. [Google Scholar] [CrossRef]
  23. Zabikhifar, S.; Markasi, A.; Yuschenko, A. Two link manipulator control using fuzzy sliding mode approach. Her. Bauman Mosc. State Tech. Univ. Ser. Instrum. Eng. 2015, 6, 30–45. [Google Scholar] [CrossRef]
Figure 1. Mechanical model of the robotic manipulator.
Figure 2. Approximation of the positions of link 1 and link 2 in the case of adaptive Elman compensation.
Figure 3. Approximation of the angular velocities of link 1 and link 2 in the case of adaptive Elman compensation.
Figure 4. Approximation of the variables $M_x(q)$, $C_x(q,\dot{q})$, and $G_x(q)$ by the Elman neural network.
Figure 5. Approximation of the positions of link 1 and link 2 in the case of adaptive RBF compensation.
Figure 6. Approximation of the angular velocities of link 1 and link 2 in the case of adaptive RBF compensation.
Figure 7. Approximation of the variables $M_x(q)$, $C_x(q,\dot{q})$, and $G_x(q)$ by the adaptive RBF neural network.
Figure 8. External disturbance compensation by Elman networks and RBF networks.
Figure 9. Comparative analysis of external disturbance compensation of the Elman network and RBF network.
Yan, Z.; Klochkov, Y.; Xi, L. Improving the Accuracy of a Robot by Using Neural Networks (Neural Compensators and Nonlinear Dynamics). Robotics 2022, 11, 83. https://doi.org/10.3390/robotics11040083
