Abstract
This paper proposes a learning control framework for the dynamic tracking task of a robotic manipulator that demands fixed-time convergence and constrained output. In contrast with model-dependent methods, the proposed solution handles unknown manipulator dynamics and external disturbances by means of a recurrent neural network (RNN)-based online approximator. First, a time-varying tangent-type barrier Lyapunov function (BLF) is introduced to construct a fixed-time virtual controller. Then, the RNN approximator is embedded in the closed-loop system to compensate for the lumped unknown term in the feedforward loop. Finally, we devise a novel fixed-time, output-constrained neural learning controller by integrating the BLF and the RNN approximator into the main framework of dynamic surface control (DSC). The proposed scheme not only guarantees that the tracking errors converge to small neighborhoods about the origin in a fixed time, but also keeps the actual trajectories within the prescribed ranges at all times, thereby improving the tracking accuracy. Experimental results illustrate the excellent tracking performance and verify the effectiveness of the online RNN estimate of the unknown dynamics and external disturbances.
1. Introduction
Robotic manipulators are widely used in industrial production, social services, and other fields owing to their unique configurational advantages []. However, several control issues cannot be ignored by developers and scholars. For example, the unique configuration complicates dynamics modeling []. Friction is affected by speed, temperature, and other factors, and is therefore difficult to model accurately []. All these issues make high-precision control a challenging task []. Many control strategies have been proposed for manipulator motion, such as robust control [], backstepping control [], dynamic surface control (DSC) [], adaptive control [], and neural network (NN)-based adaptive control [].
While the adaptive control approach, as an effective control scheme, is useful for various nonlinear dynamic systems, its estimation accuracy for unknown parameters is very limited owing to the simplicity of the adaptive laws []. To tackle model uncertainties, intelligent methods such as the NN [], fuzzy logic theory [], or Gaussian process regression [] can be adopted. Studies on the NN show that it has an exceptional ability to mimic continuous nonlinear functions, which is widely exploited in the fields of automatic control [], system and parameter identification [], and machine learning []. More recently, the combination of NNs and adaptive control has been regarded as a useful control scheme, as evidenced by fruitful research results achieving both higher tracking accuracy and higher estimation accuracy []. Although NN-based adaptive control has a lower computational burden than fuzzy-logic-based adaptive control [], the estimation accuracy of NNs may be degraded by an improper combination with the control framework [].
According to the propagation mechanism, NNs can be classified into two groups: the feedforward NN (FNN) and the recurrent NN (RNN). The radial basis function NN (RBFNN) is the most representative three-layer FNN. The RNN is characterized by its capacity to capture, memorize, and reuse dynamic responses through its recurrent signal loops []. For this reason, the RNN has received much attention and become a popular approximation approach [,]. In the field of tracking control, RNNs have been implemented on manipulators and other systems for dynamic tracking tasks in two ways: online approximation [] and offline training []. Both are effective; offline training can be adopted as long as RNN compensation with low-frequency updates meets the accuracy requirements [].
For better transient tracking performance, the prescribed performance control (PPC) scheme or the barrier Lyapunov function (BLF) scheme can be incorporated into the controller [,]. Although both schemes can achieve constrained-output tracking performance, designing a reasonable controller with the PPC is more difficult than with the BLF, and the PPC is more likely to lead to design defects from the perspective of stability theory []. Typical BLFs include the logarithm-based BLF (Log-BLF) [] and the tangent-based BLF (Tan-BLF) []. Limited by its form, the Log-BLF becomes unavailable as the predefined output constraint tends to infinity, whereas the Tan-BLF remains globally available for any predefined output constraint. Consequently, the Tan-BLF is a more general and practical approach for the control of complex and uncertain systems with or without output constraints [].
Most of the existing control schemes can only ensure asymptotic stability. From the perspective of practical engineering, it is more valuable to accomplish the control task within a desired time. Compared with asymptotic control, finite-time control guarantees that the errors converge to small neighborhoods about the origin within a finite settling time. However, the settling time of finite-time control depends heavily on the initial states of the system. To remove this restriction, the concept of fixed-time control (FTC) has been proposed. FTC is a special case of finite-time control in which the settling time is bounded and its upper bound does not depend on the initial states of the system. For a fixed-time backstepping-based control framework, fixed-time convergence must be guaranteed for both the virtual controller and the real controller [], so the designs of the fixed-time virtual controller (FTVC) and the fixed-time real controller (FTRC) need to be considered jointly. Moreover, a correct combination of FTC and the BLF can better meet the designer’s requirements for control performance. However, given the form of the Tan-BLF, both the FTVC and the FTRC must be designed rigorously and carefully; otherwise, an undesirable singularity problem will occur [].
However, in the authors’ opinion, some issues remain in existing studies on intelligent control of robotic manipulator systems: the design parameters of the control signals should be carefully selected to match the real control responsiveness [], otherwise the control performance may deteriorate, and the boundedness of the intermediate error in the RNN learning system should be properly addressed. Motivated by these issues, the dynamic tracking problem of a real-world multi-degree-of-freedom (DoF) manipulator without prior knowledge of its dynamics is studied using a novel fixed-time, output-constrained RNN learning framework. The theoretical stability analysis and the practical control performance are presented in detail. In contrast to most existing studies and controllers, the distinctive features of the proposed method are given below.
- We propose a controller with the capabilities of disturbance rejection, uncertainty compensation, and constrained output, which renders the closed-loop system practically fixed-time stable (PFTS).
- An accurate estimate of the unknown dynamics and external disturbances is achieved by an online RNN approximator. Novel RNN dynamics are derived based on Taylor expansion linearization and are constructed in a robust form to ensure system stability more soundly.
- For a class of time-varying BLFs, the Tan-BLF, which is a more general approach than the Log-BLF [] for control with or without output constraints, is introduced to construct the FTVC, and the corresponding control input term derived from the Tan-BLF is incorporated as a component of the FTRC.
To the best of our knowledge, very few existing control frameworks offer such performance. The main block diagram of the proposed control framework is illustrated in Figure 1a. The remainder of this paper is organized as follows. Section 2 and Section 3 describe the problem formulation and the RNN design, respectively. Section 4 presents the closed-loop control scheme. Section 5 conducts experiments, and Section 6 summarizes the conclusions.

Figure 1.
Overall diagram of the fixed-time output-constrained RNN learning control: (a) Main control block diagram; (b) n-DoF serial manipulator; (c) Structure of RNN; (d) Different types of control input terms derived from BLFs. Take and , for example.
2. Problem Statement and Preliminaries
2.1. Notations
Throughout the full text, $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ refer to the maximum and minimum eigenvalues of a matrix, respectively, and $\circ$ denotes the Hadamard product.
2.2. Problem Statement and Formulation
Based on the above discussion, the objective of this work is to design an FTC scheme for the dynamic tracking of a manipulator system that achieves high-accuracy, model-independent control without violating the predefined constraints, within a theoretically guaranteed fixed time. Specifically, the state tracking errors converge to small neighborhoods about the origin, and the angle tracking errors are always confined to the prescribed ranges.
The manipulator dynamics are modeled first. The serial configuration of an n-DoF fixed-base robotic manipulator is depicted in Figure 1b. In the presence of disturbances, the dynamics of the n-DoF manipulator can be described as
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = u + d, \qquad (1)$$
where $q$ denotes the joint angle vector of the n-DoF manipulator, and $\dot{q}$ and $\ddot{q}$ denote the joint angular velocity and acceleration, respectively; $M(q)$ denotes the generalized inertia matrix; $C(q,\dot{q})$ stands for the Coriolis and centrifugal forces; $G(q)$ denotes the gravitational force vector; $u$ represents the control input; and $d$ represents the external disturbances.
2.3. Preliminaries
The following mathematical results and reasonable assumptions will be used to prove the correctness of the designed control framework.
Property 1
([]). $\dot{M}(q) - 2C(q,\dot{q})$ is skew-symmetric.
Property 2
([]). $M(q)$, $C(q,\dot{q})$, and $G(q)$ are all bounded.
Assumption 1.
There exists a positive constant $\bar{d}$ such that the external disturbances are bounded with $\|d\| \le \bar{d}$.
Lemma 1
([]). Consider a nonlinear system , where and are used to describe a continuous vector field. Suppose that there exists a positive definite function such that , where , , , , and , then the nonlinear system is PFTS and will converge to the following compact set:
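For reference, one commonly used statement of this practical fixed-time criterion, including the residual compact set and the settling-time bound, reads as follows (the constants $a$, $b$, $c$, the exponents $p$, $q$, and the scalar $\theta$ here are generic and need not coincide with those of the original lemma): if
$$\dot{V}(x) \le -aV^{p}(x) - bV^{q}(x) + c,\qquad a,b>0,\; p>1,\; 0<q<1,\; c>0,$$
then the system is PFTS, the trajectories converge to the residual set
$$\Big\{ x : V(x) \le \min\Big\{ \Big(\tfrac{c}{a(1-\theta)}\Big)^{1/p},\ \Big(\tfrac{c}{b(1-\theta)}\Big)^{1/q} \Big\} \Big\},\qquad 0<\theta<1,$$
and the settling time satisfies
$$T \le \frac{1}{a\theta(p-1)} + \frac{1}{b\theta(1-q)},$$
which is independent of the initial states.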
Lemma 2
([]). For any , , , , the following inequalities hold:
Lemma 3
([]). For any , , , , , the following inequality holds:
3. Recurrent Neural Network (RNN) Construction
3.1. RNN Design
In pursuit of better tracking performance, a three-layer NN utilizing recurrent loops is specially designed. The structure of the devised RNN is shown in Figure 1c, in which signifies the time delay. Thus, the RNN can capture dynamic responses through the recurrent loops via the time delay. Details of the RNN are as follows.
Layer 1: Input layer
In the first layer, all required signals are collected, processed, and passed to the next layer:
where is the input signal; is the output of Layer 1 and denotes the mapping applied to the input signal. In this paper, this mapping is simply selected to be the identity, so the output of Layer 1 equals the input signal.
Layer 2: Activation layer
Different from the representative RBFNN, the recurrent signals are considered and reused in activation function for RNN:
where is the activation function vector (note that is an abbreviated form of to save space); is the recurrent neural weight vector; is the previous value of , obtained through the time delay ; represents the width of the Gaussian basis functions; and the centers of the receptive fields are evenly spaced according to .
Layer 3: Output layer
Finally, the output of RNN can be obtained using the activation function and forward neural weight:
where is the final output of the RNN and is the forward neural weight vector. Note that in this RNN, both the forward and recurrent neural weights and can be tuned according to a desired optimization objective. This completes the construction of the RNN.
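To make the three-layer structure concrete, the following minimal Python sketch implements the forward pass described above (identity input layer, Gaussian activation fed by the input plus the recurrently weighted previous activation, and a linear output layer). The class name, dimensions, and numerical values are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

class SimpleRNNApproximator:
    """Minimal sketch of the three-layer RNN of Section 3.1 (single output channel).
    Names and dimensions are illustrative, not the authors' implementation."""

    def __init__(self, n_in, n_nodes, centers, width):
        self.centers = centers            # (n_nodes, n_in) evenly spaced receptive-field centers
        self.width = width                # width of the Gaussian basis functions
        self.W = np.zeros((n_nodes, 1))   # forward neural weights (Layer 3)
        self.w_r = np.zeros(n_nodes)      # recurrent neural weights (Layer 2)
        self.phi_prev = np.zeros(n_nodes) # previous activation, reused via the time delay

    def forward(self, x):
        # Layer 1: input layer -- identity mapping, as stated in the paper
        h = np.asarray(x, dtype=float)
        # Layer 2: Gaussian activation augmented with the recurrent signal:
        # each node sees the current input plus its own recurrently weighted past output
        z = h + self.w_r[:, None] * self.phi_prev[:, None]        # (n_nodes, n_in)
        dist2 = np.sum((z - self.centers) ** 2, axis=1)
        phi = np.exp(-dist2 / (self.width ** 2))
        self.phi_prev = phi                                        # stored for the next step
        # Layer 3: linear output layer, W^T * phi
        return float(self.W.T @ phi)

# illustrative usage with 4 inputs and 7 nodes, centers uniformly spaced in [-1, 1]
rnn = SimpleRNNApproximator(n_in=4, n_nodes=7,
                            centers=np.linspace(-1.0, 1.0, 7)[:, None] * np.ones((1, 4)),
                            width=0.5)
print(rnn.forward([0.1, 0.0, 0.05, 0.02]))
```

In a multi-joint setting, one such approximator (or one output column of the forward weight matrix) would be used per joint.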
3.2. RNN Approximator
To handle the unknown terms in (1), an online RNN approximator is developed. According to the universal approximation theorem [,], the RNN approximator can mimic a continuous unknown vector field , which can be expressed as , where is the ideal activation function ( is an abbreviated form of ); and are the optimal weight matrices; and the approximation error vector is bounded with , where is a positive constant.
In practice, can be estimated as , where and are the estimates of and , respectively. Note that is the real output of the RNN approximator. To facilitate subsequent mathematical operations of the RNN, some useful formula transformations are given below. The error between and can be formulated as
where and . Taylor expansion linearization is adopted to derive the dynamics of the recurrent neural weights, and is converted to the following partially linear form around :
where is the first term of Taylor expansion. is the estimated error. is a high-order term. Coefficient matrix is expressed as
Note that is available for users, and is a bounded error vector since and are bounded and is an ideal constant vector. Therefore, the proposed RNN approximator can be used for approximation calculations.
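For completeness, one standard decomposition of this kind, obtained by a first-order Taylor expansion of the activation vector about the current recurrent-weight estimate, is stated below in generic symbols that may differ from the exact expressions of this paper:
$$W^{*T}\varphi^{*} - \hat{W}^{T}\hat{\varphi} \;=\; \tilde{W}^{T}\hat{\varphi} + \hat{W}^{T}\hat{\Phi}\,\tilde{w}_{r} + \varepsilon_{0},\qquad \hat{\Phi} = \left.\frac{\partial \varphi}{\partial w_{r}}\right|_{w_{r}=\hat{w}_{r}},$$
where $\tilde{W}=W^{*}-\hat{W}$ and $\tilde{w}_{r}=w_{r}^{*}-\hat{w}_{r}$ are the weight estimation errors, and $\varepsilon_{0}$ lumps the higher-order Taylor term together with the optimal approximation error. Since the activation functions, their Jacobian, and the optimal weights are bounded, $\varepsilon_{0}$ is bounded.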
4. Fixed-Time Output-Constrained RNN Learning Control Framework
In this section, a novel fixed-time output-constrained RNN learning controller, designed within the main framework of the DSC, is developed to solve the dynamic tracking problem of manipulators in the presence of unknown model dynamics and external disturbances.
4.1. Fixed-Time Controller Design
For the tracking control of a second-order manipulator system (1), we consider the following two index errors
where is the time derivative of ; is a filtered virtual controller to be designed later; and stand for the desired joint angle and angular velocity, respectively. Define the output error constraint as
where the error constraint is predefined as , where is the prescribed time-varying bound expressed as
and where and represent the maximum and minimum of , respectively. It should be noted that is restricted and strictly monotonically decreasing to with , and determines the convergence rate of . Differentiating the error (13) with respect to time and substituting the manipulator system (1) into the result, we have
where stands for the desired angular acceleration; is the time derivative of υ.
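As an illustration of the prescribed time-varying bound described above, a typical exponentially decaying profile is sketched below in Python. The exponential form and the numerical values are assumptions used only to illustrate the qualitative properties stated above (monotone decay from a maximum to a minimum value at a tunable rate).

```python
import numpy as np

def prescribed_bound(t, kb0, kb_inf, rate):
    """Typical time-varying output-error bound: starts at kb0 and decays
    monotonically to kb_inf; 'rate' sets the convergence speed.
    The exponential form is an illustrative assumption."""
    return (kb0 - kb_inf) * np.exp(-rate * t) + kb_inf

# Example: check that a measured joint tracking error stays inside the bound
t = np.linspace(0.0, 32.0, 400 * 32 + 1)                     # 400 Hz over a 32 s run
kb = prescribed_bound(t, kb0=0.30, kb_inf=0.05, rate=0.5)    # rad, illustrative values
e1 = 0.04 * np.sin(0.2 * np.pi * t)                          # stand-in tracking error signal
print("constraint satisfied:", bool(np.all(np.abs(e1) < kb)))
```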
The controller is recursively designed in the following three steps.
Step 1: FTVC design
First, construct a time-varying Tan-BLF and the corresponding control input term derived from the Tan-BLF as follows:
where is the designed positive constant that determines the maximum of . Taking the time derivative of the Lyapunov function (17) and substituting (13) into it, we have
Considering the form of the Tan-BLF, the FTVC cannot be used directly, and the DSC technique is introduced to avoid the subsequent complex computation of the time derivative of the FTVC. The following first-order low-pass filter (FOLPF) is designed:
where is a small time constant and denotes the FTVC. Define the filter error as
Differentiating the filter error (21) with respect to time and substituting (20) and (21) into it yields
where and is an unknown continuous vector.
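In discrete time, the FOLPF (20) reduces to a simple first-order update; a minimal sketch is given below, with the time constant and step size chosen only for illustration.

```python
def folpf_step(alpha_f, alpha, tau, dt):
    """One forward-Euler step of the first-order low-pass filter
    tau * d(alpha_f)/dt + alpha_f = alpha, where alpha is the raw virtual
    control (FTVC) and alpha_f is its filtered value used by the DSC loop."""
    return alpha_f + (dt / tau) * (alpha - alpha_f)

# usage: initialize alpha_f to the initial virtual control, then update each cycle
alpha_f = 0.0
for alpha in [0.10, 0.12, 0.11, 0.13]:                  # sample virtual-control values
    alpha_f = folpf_step(alpha_f, alpha, tau=0.01, dt=1.0 / 400.0)   # 400 Hz loop
```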
Then, the designed FTVC is selected as
where , , , and .
Remark 1.
For the FTVC (23), three terms should be discussed:
For , when , by L’Hospital’s Rule, we can obtain
For , when and , it is easy to obtain
Similarly, for any .
Thus, the undesirable singularity problem can never occur in the FTVC (23).
Then, substituting (21) and (23) into (19) yields
Note that . By Young’s inequality, we have
Substituting (27) into (26) and applying Lemma 2 yields
where and .
Remark 2.
For the time-varying , when , i.e., the output constraint is removed, by L’Hospital’s Rule, we can obtain
Thus, the Tan-BLF actually degenerates into the standard quadratic form, which implies that the Tan-BLF is applicable for any . Then, consider a Log-BLF and its control input term derived from BLF as follows:
Each type of is shown in Figure 1d. It can be observed that the two types of exhibit the same trend of change. However, when , we have
Consequently, the Log-BLF (30) becomes unavailable. To sum up, compared with the Log-BLF-based controller, the proposed framework is a more general methodology for controls with or without output constraints.
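To visualize the point of Remark 2 and Figure 1d, the sketch below evaluates the control input terms derived from the standard Tan-BLF and Log-BLF forms, assumed here as $V_{\tan}=\frac{k_b^2}{\pi}\tan\frac{\pi z^2}{2k_b^2}$ and $V_{\log}=\frac{1}{2}\ln\frac{k_b^2}{k_b^2-z^2}$ (the paper's exact expressions may differ), for a growing constraint $k_b$.

```python
import numpy as np

def tan_blf_term(z, kb):
    """Gradient of the Tan-BLF (kb^2/pi)*tan(pi*z^2/(2*kb^2)) w.r.t. z:
    z / cos^2(pi*z^2/(2*kb^2)); tends to plain proportional feedback z as kb -> inf."""
    return z / np.cos(np.pi * z**2 / (2.0 * kb**2)) ** 2

def log_blf_term(z, kb):
    """Gradient of the Log-BLF 0.5*ln(kb^2/(kb^2 - z^2)) w.r.t. z:
    z / (kb^2 - z^2); vanishes as kb -> inf, so the constraint-free case is lost."""
    return z / (kb**2 - z**2)

z = 0.1                                    # a sample tracking error inside the constraint
for kb in [0.2, 1.0, 10.0, 1e3]:
    print(f"kb={kb:>7}: tan-term={tan_blf_term(z, kb):.6f}, "
          f"log-term={log_blf_term(z, kb):.6f}")
# As kb grows, the tan-based term approaches z (here 0.1), i.e., the Tan-BLF degenerates
# into the standard quadratic case, while the log-based term goes to 0.
```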
Step 2: FTRC design
Construct the second Lyapunov function
Taking the time derivative of , substituting (16) into it, and using Property 1 yields
Note that , , , and are unknown in advance, and the lumped uncertainties in (34) can be defined as . To deal with , the RNN approximator is utilized and embedded in the controller, i.e., with . Accordingly, the RNN-based FTRC is designed as
where and .
Substituting FTRC (35) and error (9) into (34) yields
where . Note that is bounded satisfying , where is a positive constant. Thus, we have the following inequality:
Substituting (37) into (36), we further have
where and .
Step 3: Online RNN learning design
The weights of the RNN are tuned online according to the RNN dynamics derived from Lyapunov theory. The RNN dynamics in a robust form for the RNN-based FTRC (35) are designed as
where and are diagonal positive definite matrices; and are small positive constants, and then construct the third Lyapunov function as
Differentiating with respect to time and substituting the RNN dynamics (39) into it, we have
Note that , and then substituting this inequality into (41) yields
where . Then, for , the following two cases should be considered:
- (1)
- If , we have
- (2)
- If , applying Lemma 3, we can obtain
Similarly, construct the fourth Lyapunov function
and then the time derivative of has the following similar form to :
where and .
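Putting Steps 1–3 together, each control cycle has the structure sketched below for a single joint. This is a schematic outline only: the virtual controller, real controller, and weight-update laws are represented by simplified placeholders (a PD-like virtual term, a feedback-plus-compensation torque, and a σ-modification-style update) rather than the exact expressions (23), (35), and (39); the callables `rnn_forward` and `rnn_update` are hypothetical stand-ins for the RNN approximator of Section 3 and its robust dynamics.

```python
import numpy as np

def control_cycle(q, dq, qd, dqd, state, gains, rnn_forward, rnn_update, dt):
    """One schematic cycle of the DSC-based fixed-time RNN learning scheme (Steps 1-3).
    All laws here are simplified placeholders, not the paper's exact FTVC/FTRC/RNN dynamics."""
    e1 = q - qd                                     # constrained angle tracking error
    # Step 1: virtual controller (placeholder stabilizing term), then the FOLPF (20)
    alpha = dqd - gains["k1"] * e1
    state["alpha_f"] += (dt / gains["tau"]) * (alpha - state["alpha_f"])
    e2 = dq - state["alpha_f"]                      # second surface error
    # Step 2: feedback plus RNN compensation of the lumped uncertainty (placeholder FTRC)
    f_hat = rnn_forward(np.array([q, dq, e1, e2]))
    u = -gains["k2"] * e2 - f_hat
    # Step 3: online weight adaptation driven by e2 (placeholder for the robust dynamics (39))
    rnn_update(e2, dt)
    return float(np.clip(u, -40.0, 40.0))           # respect the 40 Nm torque limit used later
```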
4.2. Stability Analysis for Closed-Loop System
After the above subsystems’ design and analysis, we propose Theorem 1 for the devised main control framework.
Theorem 1.
Consider the manipulator system (1) under Assumption 1 together with the FTVC (23), the FTRC (35), the FOLPF (20), and the RNN dynamics (39); then the closed-loop system is PFTS. Moreover, the signals , , , , and remain in the following compact sets within a fixed time:
where is a positive constant defined in the sequel.
Proof.
Construct the following final Lyapunov function:
where . Differentiating with respect to time along (22) yields
Consider the following compact sets:
where and are positive constants. It follows that is also a compact set. From (22) all of the error variables in are bounded in the compact set , which means that a positive constant exists with , and then using Young’s inequality we can obtain
where is a designed positive constant. Combining (52) and (54) yields
Finally, taking the time derivative of along (28), (38), (47), (49), and (55) yields
where and . Note that by choosing the appropriate parameters we can guarantee . The discussions for are similar to those for , and the following inequality therefore holds:
Substituting (57) into (56) yields
where and .
Applying Lemma 2 to (58), we have
where , , and .
Finally, by Lemma 1, will converge to the following compact set:
where . The fixed settling time is bounded as
Define the mentioned positive constant . Using (51), we have
Using the maximum of arctan function and rearranging (62) yields
i.e., . Thus, the closed-loop signal remains in the compact set . Likewise, the closed-loop signals , , , and can converge to the compact sets , , , and defined as (50) within the fixed time, respectively. Furthermore, the tracking errors can never exceed the prescribed time-varying bounds, i.e., , provided that . This completes the proof of Theorem 1. □
Remark 3.
The control performance indicators involved in this paper mainly include system stability, settling time, and tracking error. To achieve fixed-time stability of the closed-loop system, the fixed-time stability criterion must be satisfied, namely, the Lyapunov function satisfies and . In this case, the settling time satisfies . Clearly, the bound on the settling time is not affected by the initial conditions, and different settling-time ranges can be prescribed in advance according to the practical performance requirements.
Remark 4.
The control performance of the closed-loop system depends on the following adjustable design parameters: (1) It can be seen from the stability analysis that the parameters , , , and adjust the convergence speed and the convergence accuracy simultaneously. By selecting larger and and appropriate and , the convergence speed can be improved and the final error reduced. (2) The exponents and determine the bound on the convergence time and influence the convergence accuracy; choosing suitable exponents can reduce the convergence time and improve the convergence accuracy. (3) If and are selected too small, the RNN estimate is not accurate enough; if they are selected too large, the overshoot of the system becomes larger. The above design parameters should be carefully tuned by trial and error to achieve satisfactory control performance.
Remark 5.
Most of the existing output-constrained control schemes can only ensure asymptotic stability of the closed-loop manipulator system. In contrast, this work extends the output-constrained control scheme to fixed-time convergence of the closed-loop system, so that the joint tracking errors are not only confined to the prescribed time-varying bounds but also converge within the fixed time. To the best of our knowledge, very few existing control frameworks achieve such performance under the same conditions.
5. Experiments
To verify the correctness and feasibility of the proposed control framework, experiments were performed on a real-world RGM-based robotic manipulator system. The experiments consist of three comparison studies, which verify the superiority of the BLF, the correctness and effectiveness of the RNN, and the fixed-time convergence of the proposed controller, respectively. The features and differences of each compared controller or case are listed in Table 1.

Table 1.
Features and differences of each compared controller or case.
5.1. Experimental System Setup
The robotic system is a self-designed manipulator (see Figure 2a) based on RGM joints providing a torque control interface (RGM integration joints, Kollmorgen Co., Radford, VA, USA); no prior knowledge of the manipulator dynamics is available for this study. An ARM board running an RT-LINUX system executes the proposed control algorithm written in the C language. Real-time joint angles, angular velocities, and control signals are transmitted over a CAN bus. The communication frequency of the CAN bus is 400 Hz, and the update frequency of the online RNN approximator is also 400 Hz.

Figure 2.
Experimental setup: (a) Experimental platform of the manipulator system; (b) Manipulator joints used in experiments, where the red arrows represent how the joints rotate.
The control task is dynamic tracking in joint space, and the two manipulator joints shown in Figure 2b are required to track the desired trajectory given as rad, . The initial states of the manipulator are deg and . To ensure safe operation of the manipulator, the maximum control torque is set to 40 Nm. The control parameters of the proposed FTVC (23) and FTRC (35) are empirically selected as , , , , , , , and , . For the RNN approximator, , , , , , and , . Note that the control parameters of the proposed controller should be carefully set according to the guidance provided in Remark 4. The number of activation nodes is . The width of the Gaussian basis functions is and the centers are uniformly spaced in . For the FOLPF (20), and . For convenience, the proposed fixed-time output-constrained RNN learning controller (35) is denoted as Controller 1.
5.2. Comparison Studies: Role of the BLF
First, to show the advantage of the BLF, an ablation study is conducted to compare the performance with and without the BLF. Specifically, referring to (29), the compared controller (Controller 2) is a simplified form of Controller 1 in which the output constraint is removed while the DSC framework and the RNN approximator are retained. To make a fair comparison, the RNN approximator parameters, the FOLPF parameter, and the other control parameters of Controller 2 are chosen to be the same as those of Controller 1, and the initial states of the manipulator and the RNN approximator are also the same as those of Controller 1. Then, Controller 2 and its FTVC are designed as
According to the FTVC in (64), the initial states of the FOLPF are selected as . It should be noted that no external disturbance acts on the system in this subsection, i.e., . The experimental results of Controllers 1 and 2 are presented in Figure 3, Figure 4, Figure 5 and Figure 6. Figure 3 and Figure 4 depict the joint tracking results and the corresponding tracking errors of Joint 1 and Joint 2, respectively. Figure 6a shows the control torques, Figure 6b presents the RNN estimates of , and Figure 6c compares the filtered virtual control signal with the virtual control signal under Controller 1.

From Figure 3 and Figure 4, the dynamic tracking is successful under each controller even in the presence of unknown manipulator dynamics, and the real joint trajectories never exceed the prescribed time-varying ranges under Controller 1. From the joint tracking errors, the tracking accuracy of Controller 1 is slightly higher than that of Controller 2 for both joints. In addition, the RMSEs of and within the time interval 2–32 s are calculated and listed in Table 2; the RMSE of Controller 1 is clearly smaller than that of Controller 2 for each joint. Although the difference in tracking performance between Controllers 1 and 2 in Figure 3 and Figure 4 is small, this does not negate the role of the BLF, because the steady-state error is very small under the selected control parameters, which results in a small control input according to (18). Figure 5 shows more in-depth comparisons of the tracking performance between Controller 1 and Controller 2. Figure 5a,b show the tracking errors of Figure 3 and Figure 4 again, together with the corresponding control input . While increases as increases, its maximum value is only 0.02 Nm within the time interval 2–32 s, so the contribution of the BLF is small when the tracking error is very small. Figure 5c,d show the tracking errors when smaller control parameters , , , and are selected for both controllers, i.e., , , , and . In this case, the tracking accuracy of Controller 1 is obviously higher than that of Controller 2 for both joints, which means that the use of the BLF can indeed reduce the error under the same conditions; thus, Controller 1 performs better than Controller 2. In addition, Figure 6a shows the different control torques under Controllers 1 and 2, illustrating the different operating states of the two controllers; the partially enlarged views of Figure 6a also show that the control torques of both controllers remain within the preset safe range for practical applications. In summary, the above comparison results and analyses demonstrate the effectiveness and superiority of the BLF.

Furthermore, under Controller 1, the filtered virtual control signal and the virtual control signal are almost identical at all times, and the partially enlarged views of Figure 6c show that the virtual control signals are well filtered by the FOLPF. Accordingly, the effectiveness of the DSC-based control framework is verified. We can also observe from Figure 6b that the estimates of Controllers 1 and 2 are almost the same, which is reasonable since the real lumped uncertainties in the two experiments are identical. Therefore, the results in this subsection show that the estimation capability of the RNN is relatively stable. Further verification of the RNN is presented in the next subsection.
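For reference, the RMSE values of Table 2 correspond to a windowed computation of the following kind over 2–32 s at the 400 Hz sampling rate; the error signal used here is a placeholder, not the logged experimental data.

```python
import numpy as np

def windowed_rmse(t, e, t_start=2.0, t_end=32.0):
    """Root-mean-square tracking error restricted to a time window,
    as used for the 2-32 s comparisons reported in Table 2."""
    mask = (t >= t_start) & (t <= t_end)
    return float(np.sqrt(np.mean(e[mask] ** 2)))

# placeholder signal at the 400 Hz sampling rate of the experiments
t = np.arange(0.0, 32.0, 1.0 / 400.0)
e1 = 0.002 * np.sin(0.2 * np.pi * t)               # stand-in joint tracking error [rad]
print(f"RMSE over 2-32 s: {windowed_rmse(t, e1):.5f} rad")
```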

Figure 3.
Trajectory tracking results and tracking errors of Joint 1 under different controllers and cases.

Figure 4.
Trajectory tracking results and tracking errors of Joint 2 under different controllers and cases.

Figure 5.
Comparisons of tracking error between Controller 1 and Controller 2: (a,b) are tracking errors and the corresponding control input ; (c,d) are tracking errors under smaller control parameters.

Figure 6.
Comparison results: (a) Control torques; (b) Estimate values of ; (c) Comparison between the filtered virtual control signal and virtual control signal under Controller 1.

Table 2.
Comparisons of RMSE between Controller 1 and Controller 2.
5.3. Performance Verification of the RNN
To verify the correctness and effectiveness of the RNN approximator, the overall performance of Controller 1 is compared with that of the classical proportional-derivative (PD) controller (Controller 3) under external disturbances. The control parameters for Controller 1 are chosen the same as those in Section 5.1. Specifically, we introduce an external disturbance signal acting on Joint 1 at 16 s, which is given as
Controller 3 is designed as
where and are control parameters of Controller 3 selected as and .
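As a reference for the baseline, a joint-space PD law of the standard form with the 40 Nm torque limit applied is sketched below; the gains and the exact structure of Controller 3 in (66) are not reproduced here, so this is only the generic form.

```python
import numpy as np

def pd_controller(q, dq, qd, dqd, kp, kd, u_max=40.0):
    """Classical joint-space PD controller with torque saturation,
    used here as the comparison baseline (gains illustrative)."""
    u = kp * (qd - q) + kd * (dqd - dq)
    return np.clip(u, -u_max, u_max)
```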
The results of Controllers 1 and 3 under external disturbances are denoted as Controller 1-II and Controller 3-II, respectively. For convenience of comparison, the results of Controller 3 without external disturbances are also shown in Figure 3, Figure 4, Figure 6a and Figure 7a. A video of the experiment for Controller 1-II is provided in the Supplementary Materials.

Figure 7.
Comparisons of disturbance rejection: (a) Trajectory tracking errors of Joint 1; (b) Norms of forward neural weights for Joint 1; (c) Estimate values of , where black arrows represent the correspondence between curves and variables; (d) Norms of recurrent neural weights for Joint 1.
Comparisons of disturbance rejection between Controller 1 and Controller 3 are shown in Figure 7. Figure 7a depicts the trajectory tracking errors of Joint 1; Figure 7b,d show the norms of the forward and recurrent neural weights for Joint 1, respectively; and Figure 7c presents the estimates of . First, it is clearly seen from Figure 3, Figure 4 and Figure 7a that the tracking performance of Controller 3 is poorer than that of Controller 1 regardless of the presence of the external disturbance. From Figure 7a, the tracking error of Controller 3 becomes obviously larger after the external disturbance is introduced, while the tracking error of Controller 1 changes little. In addition, the RMSEs within the time interval 16–32 s are calculated and listed in Table 3 for the comparison of disturbance rejection: after introducing the disturbance, the RMSE of Controller 1 increases by 5.21%, whereas the RMSE of Controller 3 increases by 18.93%.

Second, we design a method to indirectly evaluate the estimation accuracy of the RNN, since the real lumped uncertainties are unavailable. Note that the introduced sinusoidal external disturbance is unknown to the controllers but known to the users; hence, we subtract of Controller 1 from of Controller 1-II and denote the difference as . If is close to the real external disturbance , the lumped uncertainties are well estimated and the correctness of the RNN approximator is confirmed. For convenience of expression, of Controller 1-II and of Controller 1 are denoted as and , respectively. For better graphical presentation, we do not plot directly but instead define , which is represented by a gray solid line in Figure 7c. In this way, the estimation performance of the RNN can be evaluated by comparing the curves of and . The results in Figure 7c show that the RNN estimates the external disturbance accurately, since and are very close. To sum up, the above results on both aspects (tracking error and estimation accuracy) demonstrate the correctness and effectiveness of the RNN approximator, the strong disturbance-rejection capability of the proposed controller, and its better overall performance compared with the PD controller. From Figure 7b,d, the forward and recurrent neural weights are constantly tuned to cope with the time-varying lumped uncertainties in different situations.
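The indirect evaluation described above amounts to the simple computation sketched below; the array names are placeholders for the logged lumped-uncertainty estimates of Controllers 1-II and 1 and the known disturbance signal.

```python
import numpy as np

def disturbance_estimate_mismatch(f_hat_with_dist, f_hat_without_dist, d_true):
    """Indirect accuracy check of Section 5.3: the difference between the
    lumped-uncertainty estimates with and without the injected disturbance
    should track the disturbance signal known to the user."""
    delta = f_hat_with_dist - f_hat_without_dist          # estimate attributable to d(t)
    return float(np.sqrt(np.mean((delta - d_true) ** 2))) # RMS mismatch
```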

Table 3.
Comparisons of disturbance rejection between Controller 1 and Controller 3.
Remark 6.
In the RNN, although the network becomes more complex owing to the addition of recurrent loops, it can be adopted as long as the computational frequency of the NN hardware meets the required control frequency. According to the existing literature, there is no online estimation scheme in which the recurrent loop and NN dynamics in a robust form are both considered while excellent overall closed-loop performance is achieved. Thus, the robust online RNN approximator is developed here for tracking control for the first time.
5.4. Fixed-Time Convergence Verification
To verify the fixed-time convergence of Controller 1, we consider two cases with different initial states of the manipulator, denoted as Controller 1-III and Controller 1-IV, respectively. Considering the feasibility condition of the Tan-BLF, should be selected to stay within the prescribed ranges. In Controller 1-III, deg, , and . In Controller 1-IV, deg, , and . In this section, the control parameters of both cases are chosen to be the same as those in Section 5.1, and .
The experimental results of Controllers 1-III and 1-IV are provided in Figure 3, Figure 4 and Figure 6. It can be observed that the real joint trajectories of the two cases did not exceed the prescribed time-varying ranges, and the settling time and steady-state errors of Controllers 1, 1-III, and 1-IV are almost the same even with different initial conditions of the closed-loop system. These results indicate that Controller 1 exhibits the fixed-time convergence ability, whose settling time is bounded and independent of the initial system states. Additionally, from Figure 6b, the estimate values of Controllers 1, 1-III, and 1-IV are almost the same; hence the effectiveness of the RNN approximator is verified again, and these results demonstrate the stable estimate capability of the RNN approximator.
6. Conclusions
In this paper, a fixed-time RNN learning control framework using the Tan-BLF has been designed for the dynamic tracking of manipulators. The experimental results show that the proposed RNN method not only serves as a competent online approximator for uncertain systems, even in the presence of unknown manipulator dynamics and external disturbances, but also achieves better disturbance-rejection performance than the PD controller. This demonstrates that the designed NN endows the controller with significant online adaptability and a stable estimation capability. In addition, the proposed control framework not only guarantees that the joint tracking errors converge to small neighborhoods about the origin in a fixed time, but also keeps the joint angles within the prescribed ranges at all times, thereby improving the tracking accuracy. To the best of our knowledge, this is the first time that the dynamic tracking control problem of a real-world manipulator with unknown dynamics has been studied using an RNN learning approach combined with a time-varying constraint method. In this study, only two joints of the manipulator were used to verify the effectiveness and superiority of the proposed algorithm; in future research, we will apply the algorithm to all joints of the manipulator to address the task-space control problem.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23125614/s1. A video of the experiment is provided in the Supplementary Materials.
Author Contributions
Conceptualization, Q.S. and X.D.; Methodology, Q.S. and R.H.; Validation, Q.S. and X.Z.; Writing—Original Draft Preparation, Q.S.; Writing—Review & Editing, C.L. and Q.S.; Supervision, X.D. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Key R&D Program of China under Grant 2021YFC0122600.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Haddadin, S.; De Luca, A.; Albu-Schaffer, A. Robot Collisions: A Survey on Detection, Isolation, and Identification. IEEE Trans. Robot. 2017, 33, 1292–1312.
- Sage, H.G.; De Mathelin, M.F.; Ostertag, E. Robust control of robot manipulators: A survey. Int. J. Control 1999, 72, 1498–1522.
- Armstrong-Hélouvry, B.; Dupont, P.; De Wit, C.C. A survey of models, analysis tools and compensation methods for the control of machines with friction. Automatica 1994, 30, 1083–1138.
- Nicolis, D.; Allevi, F.; Rocco, P. Operational Space Model Predictive Sliding Mode Control for Redundant Manipulators. IEEE Trans. Robot. 2020, 36, 1348–1355.
- Yan, Z.; Lai, X.; Meng, Q.; Wu, M. A Novel Robust Control Method for Motion Control of Uncertain Single-Link Flexible-Joint Manipulator. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 1671–1678.
- Van, M.; Mavrovouniotis, M.; Ge, S.S. An Adaptive Backstepping Nonsingular Fast Terminal Sliding Mode Control for Robust Fault Tolerant Control of Robot Manipulators. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1448–1458.
- Guo, X.; Zhang, H.; Sun, J.; Zhou, Y. Fixed-Time Fuzzy Adaptive Control of Manipulator Systems Under Multiple Constraints: A Modified Dynamic Surface Control Approach. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 2522–2532.
- Fan, Y.; Zhu, Z.; Li, Z.; Yang, C. Neural adaptive with impedance learning control for uncertain cooperative multiple robot manipulators. Eur. J. Control 2023, 70, 100769.
- Jia, S.; Shan, J. Finite-Time Trajectory Tracking Control of Space Manipulator Under Actuator Saturation. IEEE Trans. Ind. Electron. 2020, 67, 2086–2096.
- Wang, N.; Hao, F. Event-triggered sliding mode control with adaptive neural networks for uncertain nonlinear systems. Neurocomputing 2021, 436, 184–197.
- Gupta, M.M.; Rao, D.H. On the principles of fuzzy neural networks. Fuzzy Sets Syst. 1994, 61, 1–18.
- He, K.; Deng, Y.; Wang, G.; Sun, X.; Sun, Y.; Chen, Z. Learning-Based Trajectory Tracking and Balance Control for Bicycle Robots With a Pendulum: A Gaussian Process Approach. IEEE/ASME Trans. Mechatron. 2022, 27, 634–644.
- Wang, Y.; Wang, Y.; Tie, M. Hybrid adaptive learning neural network control for steer-by-wire systems via sigmoid tracking differentiator and disturbance observer. Eng. Appl. Artif. Intell. 2021, 104, 104393.
- Xie, S.; Ren, J. Recurrent-Neural-Network-Based Predictive Control of Piezo Actuators for Trajectory Tracking. IEEE/ASME Trans. Mechatron. 2019, 24, 2885–2896.
- Shi, Q.-X.; Li, C.-S.; Guo, B.-Q.; Wang, Y.-G.; Tian, H.-Y.; Wen, H.; Meng, F.-S.; Duan, X.-G. Manipulator-based autonomous inspections at road checkpoints: Application of faster YOLO for detecting large objects. Def. Technol. 2022, 18, 937–951.
- Peng, J.; Dubay, R.; Ding, S. Observer-based adaptive neural control of robotic systems with prescribed performance. Appl. Soft Comput. 2022, 114, 108142.
- Wang, F.; Liu, Z.; Zhang, Y.; Chen, C.L.P. Adaptive fuzzy visual tracking control for manipulator with quantized saturation input. Nonlinear Dyn. 2017, 89, 1241–1258.
- Yao, Q. Neural adaptive learning synchronization of second-order uncertain chaotic systems with prescribed performance guarantees. Chaos Solitons Fractals 2021, 152, 111434.
- Castaneda, C.E.; Loukianov, A.G.; Sanchez, E.N.; Castillo-Toledo, B. Discrete-Time Neural Sliding-Mode Block Control for a DC Motor With Controlled Flux. IEEE Trans. Ind. Electron. 2012, 59, 1194–1207.
- Li, D.; Zhou, J.; Liu, Y. Recurrent-neural-network-based unscented Kalman filter for estimating and compensating the random drift of MEMS gyroscopes in real time. Mech. Syst. Sig. Process. 2021, 147, 107057.
- Fei, J.; Lu, C. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1275–1286.
- Chen, S.; Wen, J.T. Neural-Learning Trajectory Tracking Control of Flexible-Joint Robot Manipulators with Unknown Dynamics. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 128–135.
- Zhu, Y.; Qiao, J.; Guo, L. Adaptive Sliding Mode Disturbance Observer-Based Composite Control with Prescribed Performance of Space Manipulators for Target Capturing. IEEE Trans. Ind. Electron. 2019, 66, 1973–1983.
- Yang, C.; Huang, D.; He, W.; Cheng, L. Neural Control of Robot Manipulators With Trajectory Tracking Constraints and Input Saturation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4231–4242.
- Shi, Y.; Shao, X.; Zhang, W. Neural observer-based quantized output feedback control for MEMS gyroscopes with guaranteed transient performance. Aerosp. Sci. Technol. 2020, 105, 106055.
- Tee, K.P.; Ren, B.; Ge, S.S. Control of nonlinear systems with time-varying output constraints. Automatica 2011, 47, 2511–2516.
- Jin, X.; Xu, J.-X. Iterative learning control for output-constrained systems with both parametric and nonparametric uncertainties. Automatica 2013, 49, 2508–2516.
- Rahimi, H.N.; Howard, I.; Cui, L. Neural adaptive tracking control for an uncertain robot manipulator with time-varying joint space constraints. Mech. Syst. Sig. Process. 2018, 112, 44–60.
- Yao, Q. Fixed-time neural adaptive fault-tolerant control for space manipulator under output constraints. Acta Astronaut. 2023, 203, 483–494.
- Lin, J.; Liu, H.; Tian, X. Neural network-based prescribed performance adaptive finite-time formation control of multiple underactuated surface vessels with collision avoidance. J. Frankl. Inst.-Eng. Appl. Math. 2022, 359, 5174–5205.
- Yao, Q. Adaptive trajectory tracking control of a free-flying space manipulator with guaranteed prescribed performance and actuator saturation. Acta Astronaut. 2021, 185, 283–298.
- Spong, M.W.; Hutchinson, S.; Vidyasagar, M. Robot Modeling and Control; John Wiley and Sons: New York, NY, USA, 2006.
- Jiang, B.; Hu, Q.; Friswell, M.I. Fixed-Time Attitude Control for Rigid Spacecraft With Actuator Saturation and Faults. IEEE Trans. Control Syst. Technol. 2016, 24, 1892–1898.
- Hardy, G.H.; Littlewood, J.E.; Polya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952.
- Ge, S.S.; Zhang, J. Neural-network control of nonaffine nonlinear system with zero dynamics by state and output feedback. IEEE Trans. Neural Netw. 2003, 14, 900–918.