Article

Prescribed Performance Back-Stepping Tracking Control for a Class of High-Order Nonlinear Systems via a Disturbance Observer

College of Mathematics and System Science, Xinjiang University, Urumqi 830046, China
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(1), 103; https://doi.org/10.3390/e25010103
Submission received: 14 November 2022 / Revised: 29 December 2022 / Accepted: 30 December 2022 / Published: 4 January 2023
(This article belongs to the Section Complexity)

Abstract

Due to the widespread presence of disturbances in practical engineering and the wide application of high-order systems, this paper considers a class of high-order strict-feedback nonlinear systems subject to bounded disturbances and investigates the prescribed performance tracking control and anti-disturbance control problems. A novel composite control protocol that combines a disturbance observer with prescribed performance control is designed via the back-stepping method. The disturbance observer is introduced to estimate and compensate for the unknown disturbance in each step, and the prescribed performance specifications guarantee both the transient and steady-state performance of the tracking error, improving the control performance and yielding better disturbance rejection. Moreover, the technique of adding a power integrator is modified to tackle the controller design problem for high-order systems. The Lyapunov function method is utilized for rigorous stability analysis. It is shown that, while the tracking error remains entirely within the prescribed bound, all states in the closed-loop system are input-to-state stable, and the tracking error and the disturbance estimation errors asymptotically converge to zero simultaneously. Finally, the feasibility and effectiveness of the proposed control protocol are verified by simulation results.

1. Introduction

Any practical system exhibits nonlinear properties to some degree. Since the meteorologist Lorenz opened the door to mankind's understanding of the nonlinear world in the 1960s, the control of nonlinear systems has been studied intensively [1,2,3,4]. The main research methods include differential geometry methods, passivity theory, Lyapunov stability theory, and so on. Due to the complexity and diversity of nonlinear systems, different methods are applicable to different problems.
In many practical control problems, one often needs to quantitatively characterize the impact of errors and disturbances in the measurement or actuation elements of the system, so the stability of forced systems is a fundamental issue in control theory. There are two approaches, with different underlying ideas, to describing the stability of forced systems. One uses operator-theoretic techniques; input–output stability based on the small-gain theorem has yielded strong results when applied to infinite networks [5]. The other applies the state-space approach. Sontag [6] introduced the concept of input-to-state stability for the first time to systematically describe the stability of a forced system in the state space. This approach replaces the finite gain with a nonlinear gain function, resulting in fewer limitations, and the advantage of having multiple equivalent characterizations (e.g., a Lyapunov-like description) makes it more compatible with existing control theory. Nowadays, it has been widely used in neural networks [7], $H_\infty$ control [8], inverse optimal control [9], stochastic systems [10], time-delay systems [11], switched systems [12], discrete systems [13], etc.
Moreover, due to practical engineering needs, tracking control is often one of the main control objectives for nonlinear forced dynamic systems—for example, see [14,15,16] and references therein—which aims to make the system output asymptotically track our expected dynamic signal by applying inputs to the system.
Although there have been many studies of tracking control, in order to better apply it to engineering systems, research in the past decades has increasingly focused on the tracking control problem in the presence of external disturbances. With increasing requirements for control accuracy, many control methods have been proposed for systems with various disturbances and parameter uncertainties, for example, nonlinear $H_\infty$ control [8], sliding mode control [17], output regulation theory [18], and adaptive methods [19]. Although the above methods effectively attenuate or reject disturbances, output regulation theory requires the derivative of the controller [20], whereas most other methods sacrifice nominal system performance when achieving robustness [21]. To avoid these effects, Nakao et al. proposed disturbance observer-based control (DOBC) in the late 1980s, which estimates unknown disturbances that are difficult to measure directly by sensors and compensates for the equivalent disturbances in the feed-forward channel. Owing to its excellent disturbance rejection capability, DOBC has been widely used in various practical systems, such as servo systems [22] and robot systems [23]. Reference [24] proposed a method combining DOBC and sliding mode control to estimate the disturbance and attenuate it using a designed sliding surface. However, the tracking errors in existing studies such as [25] do not reach asymptotic convergence. Subsequently, the back-stepping method was proposed [26]. This method decomposes a complex nonlinear system into subsystems and uses virtual control laws and Lyapunov functions designed for each subsystem to complete the controller design for the entire system. With the development of the back-stepping method, controller construction became systematic, and it was then combined with DOBC in the literature [27] for disturbed nonlinear problems.
Most existing tracking control studies have focused on stability without considering constraints on the transient performance of the system before it reaches steady state, which is often limited by factors such as hardware and interaction with humans. More than a decade ago, Bechlioulis and Rovithakis first proposed prescribed performance control (PPC) in [28,29] for nonlinear single-input single-output (SISO) and multi-input multi-output (MIMO) systems, where the transient and steady-state performance of the system can be constrained simultaneously using a performance function and asymptotic tracking control is achieved. PPC is gradually being used to solve various control problems [30,31]. In [32], Chen and Yang introduced a novel performance function into PPC and developed a controller based on the back-stepping method to achieve tracking control with prescribed performance.
It should be noted that, on the one hand, although the PPC approach keeps the tracking error within the prescribed constraint at all times, it is still difficult to design a specific composite controller that simultaneously guarantees the prescribed performance and achieves tracking control when external disturbances are present in the system. Bai et al. only considered the prescribed performance tracking control problem for high-order nonlinear systems without external disturbances [31]. On the other hand, most studies of control methods that introduce disturbance observers, to our knowledge, did not consider prescribed performance specifications [22,23,24,25]. In addition, there have been few previous studies dealing with high-order nonlinear systems. Chen et al. investigated an adaptive output feedback control law for first-order, unknown, pure-feedback nonlinear systems with external disturbances [32]. As far as we know, no work so far has considered the prescribed performance tracking control and anti-disturbance control problems simultaneously for high-order nonlinear systems, which motivated this work.
Inspired by this, a composite controller is designed for a class of high-order strict-feedback systems with external disturbances to address the prescribed performance tracking problem. Concretely, a disturbance observer is used to estimate and compensate for the external disturbances; a prescribed performance function and a transformation function are used to convert the original prescribed performance tracking control problem into an unconstrained one with the same stability properties; and finally, the DOB technique and a back-stepping method incorporating the adding-a-power-integrator technique for high-order systems are combined in the controller. This control scheme ensures the prescribed transient and steady-state behavior of the tracking error while endowing the high-order nonlinear system with stronger disturbance rejection. This article has the following contributions relative to existing results:
(1)
The proposed composite controller solves the output tracking problem of a class of high-order nonlinear systems, where the system states are stabilized and the tracking error converges to zero.
(2)
Differently from the methods designed to attenuate disturbances to a specified region [33,34], the estimation errors for the nonvanishing disturbances converge to zero. That is, this control scheme can eliminate the effect of the disturbances on the output.
(3)
In the absence of external disturbances, the nominal control performance of the proposed protocol is retained.
(4)
Unlike the previous results in [30,35], the transient and steady-state behavior of the tracking error is not only confined within a prescribed bound, but zero steady-state output tracking error is also guaranteed.
The rest of this article is organized as follows. In Section 2, the problem formulation and preliminaries are given. In Section 3, the composite control protocol is constructed by utilizing the back-stepping technique, and the stability and the prescribed performance are analyzed. In Section 4, an example is given to show the effectiveness of the design. A short conclusion is given in Section 5.

2. Problem Formulation and Preliminaries

2.1. Problem Formulation

The following notation is used throughout the paper. For a given vector $x=(x_1,\dots,x_n)^T$, $\|x\|=(x_1^2+\cdots+x_n^2)^{1/2}$ is the Euclidean norm of $x$, and $\bar{x}_i=(x_1,\dots,x_i)^T$, $i=1,\dots,n$.
Consider a class of high-order strict-feedback nonlinear systems modeled by
$$\begin{aligned}\dot x_1(t)&=x_2^{p_1}(t)+\phi_1(\bar x_1(t))+d_1(t),\\ \dot x_i(t)&=x_{i+1}^{p_i}(t)+\phi_i(\bar x_i(t))+d_i(t),\quad i=2,\dots,n-1,\\ \dot x_n(t)&=u^{p_n}(t)+\phi_n(\bar x_n(t))+d_n(t),\\ y(t)&=x_1(t),\end{aligned}\qquad(1)$$
where $x_i(t)\in\mathbb{R}$, $i=1,\dots,n$, denotes the system state, and $u(t)\in\mathbb{R}$, $d_i(t)\in\mathbb{R}$, and $y(t)\in\mathbb{R}$, respectively, represent the control input, the unknown disturbances, and the system output. $p_i$, $i=1,\dots,n$, are positive odd integers, and $\phi_i(\cdot)$, $i=1,\dots,n$, are known nonlinear continuous functions. Moreover, we define $y_d(t)$ as the given reference signal and $E(t)$ as the tracking error between $y(t)$ and $y_d(t)$.
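To make the model concrete, the following minimal sketch (our own illustration, not code from the paper) encodes the right-hand side of (1) for $n=2$; the nonlinearities, powers, and disturbances used here are placeholders chosen purely for illustration.

```python
import numpy as np

def plant_rhs(t, x, u, p=(1, 3)):
    """Right-hand side of system (1) for n = 2 with illustrative phi_i and d_i."""
    x1, x2 = x
    p1, p2 = p                        # positive odd powers p_1, p_2
    phi1 = 0.0                        # phi_1(x_1): assumed zero in this sketch
    phi2 = -4.9 * np.sin(x1)          # phi_2(x_1, x_2): example nonlinearity
    d1 = 0.1 * np.cos(t)              # illustrative bounded disturbances
    d2 = 0.05 * np.sin(t)
    dx1 = x2**p1 + phi1 + d1          # x1_dot = x2^{p1} + phi_1 + d_1
    dx2 = u**p2 + phi2 + d2           # x2_dot = u^{p2} + phi_2 + d_2
    return np.array([dx1, dx2])
```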
In order to solve the anti-disturbance and prescribed performance tracking control problem of system (1), we aimed to develop a novel composite controller that meets the following control objectives for system (1):
  • The tracking error E ( t ) converges to zero and achieves the prescribed performance in both transient state and steady state.
  • All states in the closed-loop system are stable.
Assumption 1. 
Define $p_i$, $i=1,\dots,n$, as positive odd integers such that:
(i) 
$p$ is defined as $p=\max\{p_i\}$, $i=1,\dots,n$;
(ii) 
$p_i$ satisfies $\frac{p+1}{p_i}\ge p-p_{i+1}+1$, $i=1,\dots,n-1$.
Assumption 2. 
The disturbances satisfy the following conditions:
(i) 
$d_i(t)$ and its derivative $\dot d_i(t)$ are bounded, and $d_i(t)$ is nonvanishing.
(ii) 
$\dot d_i(t)\to0$ as $t\to\infty$.
Assumption 3. 
The expected signal $y_d(t)$ and its $i$th-order derivatives $y_d^{(i)}(t)$ are bounded and known.
Remark 1. 
Assumption 1 is utilized to ensure the applicability of the adding-a-power-integrator technique. Assumption 2 is widely used in the field of disturbance estimation because the derivatives of the disturbances affect the convergence of the error dynamics, and this assumption is essential in analyzing the stability of the disturbance estimation error. It is worth pointing out that Assumption 3 is a standard assumption for output tracking control of nonlinear systems, and similar assumptions can be found in the literature [14,15,16].
Definition 1. 
A continuous function $\eta:[0,b)\to[0,\infty)$ is said to belong to class $\mathcal{K}$ if it is strictly increasing and $\eta(0)=0$. Additionally, it is said to belong to class $\mathcal{K}_\infty$ if $b=\infty$ and $\eta(s)\to\infty$ as $s\to\infty$.
Lemma 1 
([36]). Consider the following system:
$$\dot x(t)=f(t,x(t),u(t)),\qquad x(t)\in\mathbb{R}^n,\quad u(t)\in\mathbb{R}^m.\qquad(2)$$
Let V ( t , x ) be a continuously differentiable function such that
$$\pi_1(\|x(t)\|)\le V(t,x)\le\pi_2(\|x(t)\|),\qquad \frac{\partial V}{\partial t}+\frac{\partial V}{\partial x}f(t,x(t),u(t))\le-\pi_3(\|x(t)\|),\quad \forall\,\|x(t)\|\ge\pi_4(\|u(t)\|)>0,$$
where $\pi_1(\cdot)$ and $\pi_2(\cdot)$ are class $\mathcal{K}_\infty$ functions, $\pi_3(\cdot)$ is a continuous positive definite function, and $\pi_4(\cdot)$ is a class $\mathcal{K}$ function. Then, system (2) is input-to-state stable (ISS).
Lemma 2 
([36]). Consider system (2). If it is globally input-to-state stable and $\lim_{t\to\infty}u(t)=0$, then the state of system (2) will asymptotically converge to zero; that is, $\lim_{t\to\infty}x(t)=0$.
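As a simple illustration of how Lemmas 1 and 2 are used (a standard textbook example, added here for the reader and not taken from [36]), consider the scalar system $\dot x=-x+u$ with $V(x)=\frac12x^2$. Then $\dot V=-x^2+xu\le-\frac12x^2$ whenever $\|x\|\ge2\|u\|$, so Lemma 1 holds with $\pi_3(r)=\frac12r^2$ and $\pi_4(r)=2r$ and the system is ISS; if, in addition, $u(t)\to0$ as $t\to\infty$, Lemma 2 gives $x(t)\to0$.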
To complete this section, we state some known inequalities as lemmas; the modified adding-a-power-integrator technique is based on them, and they will be utilized to deal with the error system.
Lemma 3 
([37]). For any real numbers $x$, $y$ and any positive odd integer $q\ge1$, the following inequality holds: $|x^q-y^q|\le q|x-y|\big(x^{q-1}+y^{q-1}\big)$.
Lemma 4 
([37]). For any constant $q\ge0$, the following inequality holds:
$$|x+y|^q\le\max\{2^{q-1},1\}\big(|x|^q+|y|^q\big).$$
In this paper, because the value of $q=p_i-1$ may be either smaller or larger than 1, the relation between $q$ and 1 requires a case distinction. Thus, to simplify the later proofs, both cases ($q<1$ and $q\ge1$) are covered by the single inequality
$$|x+y|^q\le2^q\big(|x|^q+|y|^q\big).$$
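A short justification of this combined bound (a standard argument, added here for completeness): for $q\ge1$, convexity of $t\mapsto t^q$ gives $|x+y|^q\le(|x|+|y|)^q\le2^{q-1}\big(|x|^q+|y|^q\big)\le2^q\big(|x|^q+|y|^q\big)$; for $0\le q<1$, the map $t\mapsto t^q$ is subadditive on $[0,\infty)$, so $|x+y|^q\le|x|^q+|y|^q\le2^q\big(|x|^q+|y|^q\big)$.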
Lemma 5 
([38]). For any positive real numbers $m$ and $n$, any real number $\varepsilon>0$, any real variables $x$ and $y$, and any real-valued function $a(x,y)$, the following two inequalities hold:
$$|x|^m|y|^n\le\frac{m}{m+n}\varepsilon|x|^{m+n}+\frac{n}{m+n}\varepsilon^{-\frac{m}{n}}|y|^{m+n},$$
$$a(x,y)|x|^m|y|^n\le c(x,y)|x|^{m+n}+\frac{n}{m+n}\Big[\frac{m}{(m+n)c(x,y)}\Big]^{\frac{m}{n}}|a(x,y)|^{\frac{m+n}{n}}|y|^{m+n},$$
where $c(x,y)>0$.

2.2. Prescribed Performance

In order to guarantee the transient and steady-state performance of the tracking error $E(t)=y(t)-y_d(t)$ simultaneously, a positive, decreasing, smooth function $\nu(t):\mathbb{R}_+\to\mathbb{R}_+$ is chosen as the prescribed performance function (PPF) with $\lim_{t\to\infty}\nu(t)=\nu_\infty>0$. In this research, $\nu(t)$ is chosen as
$$\nu(t)=(\nu_0-\nu_\infty)e^{-\rho(t)t}+\nu_\infty,$$
$$\rho(t)=\rho_\infty\,\frac{\tanh(\varepsilon_0(t-t_0))+1}{2},$$
where $\rho_\infty$, $\varepsilon_0$, $t_0$, and $\nu_0>\nu_\infty$ are positive parameters to be designed according to practical requirements.
Utilizing a similar idea to [29], the prescribed performance can be guaranteed by achieving
$$-\underline\delta\,\nu(t)<E(t)<\bar\delta\,\nu(t),\quad\forall t>0,\qquad(3)$$
where $\underline\delta>0$ and $\bar\delta>0$ are constants. Additionally, it must be pointed out that $\nu_0$, $\underline\delta$, and $\bar\delta$ should be chosen such that $-\underline\delta\,\nu(0)<E(0)<\bar\delta\,\nu(0)$.
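The following short sketch (our own, with placeholder parameter values) shows how the performance funnel defined by $\nu(t)$, $\underline\delta$, and $\bar\delta$ can be evaluated numerically; it assumes the form of $\rho(t)$ reconstructed above.

```python
import numpy as np

def rho(t, rho_inf=0.1, eps0=1.0, t0=1.0):
    """Time-varying decay rate rho(t) that switches on smoothly around t0."""
    return rho_inf * (np.tanh(eps0 * (t - t0)) + 1.0) / 2.0

def nu(t, nu0=2.0, nu_inf=0.1, **kw):
    """Prescribed performance function nu(t), decreasing from nu0 towards nu_inf."""
    return (nu0 - nu_inf) * np.exp(-rho(t, **kw) * t) + nu_inf

# The tracking error E(t) must stay inside (-delta_lo*nu(t), delta_hi*nu(t)).
t = np.linspace(0.0, 30.0, 301)
delta_lo, delta_hi = 1.0, 2.0
lower, upper = -delta_lo * nu(t), delta_hi * nu(t)
```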
Remark 2. 
The principle of PPC is to transform the tracking error constrained by the performance function into an unconstrained error that is easier to handle. Reference [29] states that the prescribed performance is guaranteed when the tracking error converges to an arbitrarily small residual set with a convergence rate and maximum overshoot no worse than the prescribed values. Therefore, to solve the control problem with the prescribed performance (3), a smooth and strictly increasing function $T(\chi(t))$ of the transformed error $\chi(t)\in\mathbb{R}$ is defined which satisfies
(i) 
$-\underline\delta<T(\chi(t))<\bar\delta$, $\forall\,\chi(t)\in L_\infty$,
(ii) 
$\lim_{\chi(t)\to+\infty}T(\chi(t))=\bar\delta$, $\lim_{\chi(t)\to-\infty}T(\chi(t))=-\underline\delta$.
Given the properties of $T(\chi(t))$, condition (3) is equivalent to
$$E(t)=\nu(t)T(\chi(t)).$$
As $T(\chi(t))$ is strictly monotonically increasing and $\nu(t)\ge\nu_\infty>0$, the inverse function can be written as
$$\chi(t)=T^{-1}\!\left(\frac{E(t)}{\nu(t)}\right).$$
From the above analysis, it can be observed that if $\chi(t)$ is bounded, then the prescribed performance (3) is guaranteed. To facilitate the control design that stabilizes $\chi(t)$ in (4), the transformation function $T(\chi(t))$ is chosen as
$$T(\chi(t))=\frac{\bar\delta e^{\chi(t)}-\underline\delta e^{-\chi(t)}}{e^{\chi(t)}+e^{-\chi(t)}};$$
moreover, from (5), the transformed error $\chi(t)$ can be deduced as
$$\chi(t)=T^{-1}\!\left(\frac{E(t)}{\nu(t)}\right)=\frac12\ln\frac{T(\chi(t))+\underline\delta}{\bar\delta-T(\chi(t))}.$$
Therefore, based on (1), (4), and (5), the derivative of the transformed error $\chi(t)$ is derived as
$$\begin{aligned}\dot\chi(t)&=\frac{d}{dt}\bigg[\frac12\ln\frac{T(\chi(t))+\underline\delta}{\bar\delta-T(\chi(t))}\bigg]=\frac12\bigg[\frac{1}{T(\chi(t))+\underline\delta}+\frac{1}{\bar\delta-T(\chi(t))}\bigg]\dot T(\chi(t))\\
&=\frac12\bigg[\frac{1}{T(\chi(t))+\underline\delta}+\frac{1}{\bar\delta-T(\chi(t))}\bigg]\frac{\dot E(t)\nu(t)-E(t)\dot\nu(t)}{\nu^2(t)}\\
&=\Gamma\big[x_2^{p_1}(t)+\phi_1(\bar x_1(t))+d_1(t)-\dot y_d(t)-\Upsilon\big],\end{aligned}$$
where $\Gamma=\frac{1}{2\nu(t)}\bigg[\frac{1}{\frac{E(t)}{\nu(t)}+\underline\delta}+\frac{1}{\bar\delta-\frac{E(t)}{\nu(t)}}\bigg]>0$ and $\Upsilon=\frac{E(t)\dot\nu(t)}{\nu(t)}$.
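A minimal numerical sketch of this transformation (our own illustration; the values of $\underline\delta$ and $\bar\delta$ below are placeholders):

```python
import numpy as np

def chi_from_error(E, nu_t, d_lo=1.0, d_hi=2.0):
    """Unconstrained error chi = T^{-1}(E/nu); valid while -d_lo*nu < E < d_hi*nu."""
    ratio = E / nu_t
    return 0.5 * np.log((ratio + d_lo) / (d_hi - ratio))

def gamma_factor(E, nu_t, d_lo=1.0, d_hi=2.0):
    """Gamma appearing in the chi-dynamics; positive inside the prescribed tube."""
    ratio = E / nu_t
    return (1.0 / (ratio + d_lo) + 1.0 / (d_hi - ratio)) / (2.0 * nu_t)
```

Keeping $\chi(t)$ bounded is then equivalent to keeping $E(t)$ strictly inside the performance funnel, which is exactly what the subsequent back-stepping design exploits.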
Remark 3. 
It is noted that $\bar\delta\nu(0)$ specifies the upper bound of the maximum overshoot and $-\underline\delta\nu(0)$ represents the lower one; the decreasing rate of $\nu(t)$, which is governed by $\rho(t)$, imposes a lower bound on the required convergence speed of $E(t)$. Furthermore, representing the maximum allowable size of the tracking error at steady state, the positive parameter $\nu_\infty=\lim_{t\to\infty}\nu(t)$ can be selected to be arbitrarily small to improve the tracking accuracy.

2.3. Disturbance Observer

In system (1), the disturbance $d_i(t)$ is unknown. To obtain its estimate $\hat d_i(t)$, the following nonlinear DOB is designed:
$$\hat d_i(t)=\lambda_i\big(x_i(t)-p_i(t)\big),\qquad \dot p_i(t)=x_{i+1}^{p_i}(t)+\phi_i(\bar x_i(t))+\hat d_i(t),\qquad(7)$$
where $x_{n+1}(t)=u(t)$, $p_i(t)$ represents the internal state of the DOB, and $\lambda_i>0$.
From (7), we know
$$\dot{\hat d}_i(t)=\lambda_i\big(\dot x_i(t)-\dot p_i(t)\big)=\lambda_i\big(d_i(t)-\hat d_i(t)\big).\qquad(8)$$
Let $e_i(t)=d_i(t)-\hat d_i(t)$. Then, based on (1), (7), and (8), the disturbance estimation error system can be described as
$$\dot e_i(t)=\dot d_i(t)-\dot{\hat d}_i(t)=-\lambda_ie_i(t)+\dot d_i(t).\qquad(9)$$
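The observer (7)–(9) is easy to prototype; the sketch below (our own, with illustrative signals) integrates one observer channel with forward Euler. The internal state is named `z_i` here to avoid clashing with the power $p_i$; the paper calls it $p_i(t)$.

```python
import numpy as np

def dob_step(x_i, z_i, x_next_pow, phi_i, lam, dt):
    """One Euler step of the DOB (7) for channel i.
    x_next_pow stands for x_{i+1}^{p_i} (or u^{p_n} at the last channel)."""
    d_hat = lam * (x_i - z_i)                 # d_hat_i = lambda_i * (x_i - p_i)
    z_i = z_i + dt * (x_next_pow + phi_i + d_hat)  # p_i_dot = x_{i+1}^{p_i} + phi_i + d_hat_i
    return z_i, d_hat
```

Since the estimation error obeys $\dot e_i=-\lambda_ie_i+\dot d_i$, a larger $\lambda_i$ makes $\hat d_i$ track slowly varying disturbances more tightly.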
Remark 4. 
In an actual control system, it is necessary to design a controller with robustness to avoid the influence of model uncertainty, parameter perturbations, external disturbances, and other factors. The DOB-based controller can effectively eliminate the influence of the above factors. In addition, the composite controller with a DOB can be divided into two parts, an inner loop and an outer loop, which is convenient for design and implementation. Concretely speaking, the inner loop improves the robustness of the system, and the outer loop can be flexibly designed to achieve the control objective. Additionally, through the compensation of the equivalent disturbance by the DOB, the system presents its nominal performance, thereby facilitating the design of the outer-loop controller.

3. Main Results

This section is divided into two parts. First, a composite controller is recursively designed by means of the back-stepping method and the nonlinear disturbance observer constructed above. Second, the main results of this paper are established in two theorems with rigorous proofs.

3.1. Composite Controller

Combining the transformed error dynamics with system (1), the system to be controlled becomes
$$\begin{aligned}\dot\chi(t)&=\Gamma\big[x_2^{p_1}(t)+\phi_1(\bar x_1(t))+d_1(t)-\dot y_d(t)-\Upsilon\big],\\ \dot x_i(t)&=x_{i+1}^{p_i}(t)+\phi_i(\bar x_i(t))+d_i(t),\quad i=2,\dots,n-1,\\ \dot x_n(t)&=u^{p_n}(t)+\phi_n(\bar x_n(t))+d_n(t),\\ y(t)&=x_1(t).\end{aligned}\qquad(10)$$
Then, as the preparatory step of the whole composite controller design, we introduce the error coordinates
$$s_1(t)=\chi(t)-\frac12\ln\frac{\underline\delta}{\bar\delta},\qquad s_i(t)=x_i(t)-\alpha_i(t),\quad i=2,\dots,n,\qquad x_{n+1}(t)=\alpha_{n+1}(t)=u(t),\qquad(11)$$
where α i ( t ) denotes the virtual control input to be determined for the ith subsystem.
To simplify the expressions, a function $f(x(t))$ may be written as $f(x)$ or $f$ in the following analysis. The design procedure of the back-stepping composite controller is given as follows:
STEP 1. Consider the first subsystem as
$$\dot s_1(t)=\Gamma\big[x_2^{p_1}(t)+\phi_1(\bar x_1(t))+d_1(t)-\dot y_d(t)-\Upsilon\big].\qquad(12)$$
Choose a Lyapunov function as
$$V_1=\frac{s_1^{p-p_1+2}(t)}{p-p_1+2}+\frac{e_1^{p-p_1+2}(t)}{p-p_1+2}.\qquad(13)$$
According to (9) and (12), the time derivative of V 1 yields
$$\begin{aligned}\dot V_1&=s_1^{p-p_1+1}\dot s_1+e_1^{p-p_1+1}\dot e_1\\
&=s_1^{p-p_1+1}\Gamma\big[x_2^{p_1}+\phi_1(\bar x_1)+d_1-\dot y_d-\Upsilon\big]-\lambda_1e_1^{p-p_1+2}+e_1^{p-p_1+1}\dot d_1\\
&=s_1^{p-p_1+1}\Gamma\big[\alpha_2^{p_1}+\phi_1(\bar x_1)+\hat d_1-\dot y_d-\Upsilon\big]-\lambda_1e_1^{p-p_1+2}+e_1^{p-p_1+1}\dot d_1+s_1^{p-p_1+1}\Gamma e_1+s_1^{p-p_1+1}\Gamma\big(x_2^{p_1}-\alpha_2^{p_1}\big),\end{aligned}\qquad(14)$$
With the help of Lemmas 3 and 4 and (11), one gets
$$\begin{aligned}\big|s_1^{p-p_1+1}\Gamma\big(x_2^{p_1}-\alpha_2^{p_1}\big)\big|&\le\Gamma p_1|s_1|^{p-p_1+1}|x_2-\alpha_2|\big(x_2^{p_1-1}+\alpha_2^{p_1-1}\big)\\
&=\Gamma p_1|s_1|^{p-p_1+1}|s_2|\big[(s_2+\alpha_2)^{p_1-1}+\alpha_2^{p_1-1}\big]\\
&\le\Gamma p_1|s_1|^{p-p_1+1}|s_2|\big[2^{p_1-1}\big(|s_2|^{p_1-1}+|\alpha_2|^{p_1-1}\big)+|\alpha_2|^{p_1-1}\big]\\
&=2^{p_1-1}\Gamma p_1|s_1|^{p-p_1+1}|s_2|^{p_1}+\big(2^{p_1-1}+1\big)\Gamma p_1|s_1|^{p-p_1+1}|s_2||\alpha_2|^{p_1-1},\end{aligned}\qquad(15)$$
and by applying the first inequality of Lemma 5 with $m=p-p_1+1$, $n=p_1$, and $\varepsilon=\frac{p+1}{(p-p_1+1)p_12^{p_1}}$, we obtain
$$2^{p_1-1}\Gamma p_1|s_1|^{p-p_1+1}|s_2|^{p_1}\le p_12^{p_1-1}\frac{p-p_1+1}{p+1}\varepsilon|s_1|^{p+1}+p_12^{p_1-1}\frac{p_1}{p+1}\varepsilon^{-\frac{p-p_1+1}{p_1}}\big|\Gamma^{\frac{1}{p_1}}s_2\big|^{p+1}\le\frac12s_1^{p+1}+s_2^{p+1}\Gamma^{\frac{p+1}{p_1}}\beta_{11},\qquad(16)$$
where $\beta_{11}=\frac{p_1^2\,2^{p_1-1}}{p+1}\Big[\frac{p+1}{(p-p_1+1)p_12^{p_1}}\Big]^{-\frac{p-p_1+1}{p_1}}$.
Using the same $m,n$ and letting $\varepsilon=\frac{p+1}{(p-p_1+1)p_1(2^{p_1}+2)}$ in the first inequality of Lemma 5 yields
$$\big(2^{p_1-1}+1\big)\Gamma p_1|s_1|^{p-p_1+1}|s_2||\alpha_2|^{p_1-1}\le p_1\big(2^{p_1-1}+1\big)\frac{p-p_1+1}{p+1}\varepsilon|s_1|^{p+1}+p_1\big(2^{p_1-1}+1\big)\frac{p_1}{p+1}\varepsilon^{-\frac{p-p_1+1}{p_1}}\big|\Gamma^{\frac{1}{p_1}}s_2^{\frac{1}{p_1}}\alpha_2^{\frac{p_1-1}{p_1}}\big|^{p+1}\le\frac12s_1^{p+1}+s_2^{\frac{p+1}{p_1}}\Gamma^{\frac{p+1}{p_1}}\beta_{12},\qquad(17)$$
where $\beta_{12}=\frac{(2^{p_1-1}+1)p_1^2}{p+1}\Big[\frac{p+1}{(p-p_1+1)p_1(2^{p_1}+2)}\Big]^{-\frac{p-p_1+1}{p_1}}|\alpha_2|^{\frac{(p+1)(p_1-1)}{p_1}}$.
Meanwhile, with the help of the second inequality of Lemma 5 with $m=1$, $n=p-p_1+1$, and $a(x,y)=\Gamma$, we get
$$\big|e_1\Gamma s_1^{p-p_1+1}\big|\le c_1|e_1|^{p-p_1+2}+a_1\Gamma^{\frac{p-p_1+2}{p-p_1+1}}|s_1|^{p-p_1+2},\qquad(18)$$
where $a_1=\frac{1}{p-p_1+2}\Big[\frac{1}{(p-p_1+2)c_1}\Big]^{\frac{1}{p-p_1+1}}$ and $c_1>0$.
Now, we select the virtual control law
$$\alpha_2=\bigg[-\frac{k_1s_1+s_1^{p_1}}{\Gamma}-\phi_1(\bar x_1)-\hat d_1+\dot y_d+\Upsilon-a_1\Gamma^{\frac{p-p_1+2}{p-p_1+1}}s_1\bigg]^{\frac{1}{p_1}},\qquad(19)$$
where $k_1>0$.
By substituting (15)–(18) and the control law (19) into (14), the latter is rewritten as
$$\dot V_1\le-k_1s_1^{p-p_1+2}+s_2^{p+1}\Gamma^{\frac{p+1}{p_1}}\beta_{11}+s_2^{\frac{p+1}{p_1}}\Gamma^{\frac{p+1}{p_1}}\beta_{12}-(\lambda_1-c_1)e_1^{p-p_1+2}+e_1^{p-p_1+1}\dot d_1.\qquad(20)$$
STEP 2. Based on $s_2=x_2-\alpha_2$, (10), and (19), we have
$$\begin{aligned}\dot s_2=\dot x_2-\dot\alpha_2=\;&x_3^{p_2}+\phi_2(\bar x_2)+d_2-\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&-\frac{\partial\alpha_2}{\partial x_1}\big[x_2^{p_1}+\phi_1(\bar x_1)+d_1\big]-\frac{\partial\alpha_2}{\partial\hat d_1}\lambda_1e_1.\end{aligned}\qquad(21)$$
Choose the following Lyapunov function
$$V_2=V_1+\frac{s_2^{p-p_2+2}}{p-p_2+2}+\frac{e_2^{p-p_2+2}}{p-p_2+2}.\qquad(22)$$
Combining (9), (20), and (21), the derivative of $V_2$ satisfies
$$\begin{aligned}\dot V_2=\;&\dot V_1+s_2^{p-p_2+1}\dot s_2+e_2^{p-p_2+1}\dot e_2\\
\le\;&-k_1s_1^{p-p_1+2}+s_2^{p+1}\Gamma^{\frac{p+1}{p_1}}\beta_{11}+s_2^{\frac{p+1}{p_1}}\Gamma^{\frac{p+1}{p_1}}\beta_{12}\\
&+s_2^{p-p_2+1}\Big(x_3^{p_2}+\phi_2(\bar x_2)+d_2-\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial y_d^{(j)}}y_d^{(j+1)}-\frac{\partial\alpha_2}{\partial x_1}\big[x_2^{p_1}+\phi_1(\bar x_1)+d_1\big]-\frac{\partial\alpha_2}{\partial\hat d_1}\lambda_1e_1\Big)\\
&-(\lambda_1-c_1)e_1^{p-p_1+2}+e_1^{p-p_1+1}\dot d_1-\lambda_2e_2^{p-p_2+2}+e_2^{p-p_2+1}\dot d_2\\
\le\;&-k_1s_1^{p-p_1+2}-s_2^{p-p_2+1}\frac{\partial\alpha_2}{\partial x_1}e_1+s_2^{p-p_2+1}\big(x_3^{p_2}-\alpha_3^{p_2}\big)\\
&+s_2^{p-p_2+1}\Big[\alpha_3^{p_2}+s_2^{p_2}\Gamma^{\frac{p+1}{p_1}}\beta_{11}+s_2^{\bar p_2}\Gamma^{\frac{p+1}{p_1}}\beta_{12}+\phi_2(\bar x_2)+e_2+\hat d_2-\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&\qquad-\frac{\partial\alpha_2}{\partial x_1}\big(x_2^{p_1}+\phi_1(\bar x_1)+\hat d_1\big)-\frac{\partial\alpha_2}{\partial\hat d_1}\lambda_1e_1\Big]\\
&-(\lambda_1-c_1)e_1^{p-p_1+2}+e_1^{p-p_1+1}\dot d_1-\lambda_2e_2^{p-p_2+2}+e_2^{p-p_2+1}\dot d_2,\end{aligned}\qquad(23)$$
where $\bar p_2=\frac{p+1}{p_1}-(p-p_2+1)$ is a non-negative constant under the second condition of Assumption 1.
Now, $\alpha_3$ is designed as
$$\begin{aligned}\alpha_3=\Big[&-s_2^{p_2}-(a_2+\hat a_2+k_2)s_2-s_2^{p_2}\Gamma^{\frac{p+1}{p_1}}\beta_{11}-s_2^{\bar p_2}\Gamma^{\frac{p+1}{p_1}}\beta_{12}-\phi_2(\bar x_2)-\hat d_2+\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)\\
&+\sum_{j=0}^{1}\frac{\partial\alpha_2}{\partial y_d^{(j)}}y_d^{(j+1)}+\frac{\partial\alpha_2}{\partial x_1}\big(x_2^{p_1}+\phi_1(\bar x_1)+\hat d_1\big)+\frac{\partial\alpha_2}{\partial\hat d_1}\lambda_1e_1\Big]^{\frac{1}{p_2}},\end{aligned}\qquad(24)$$
where $k_2>0$, and $a_2$ and $\hat a_2$ are designed below.
Meanwhile, by applying Lemma 5, it follows that
$$\big|e_2s_2^{p-p_2+1}\big|\le c_2|e_2|^{p-p_2+2}+a_2|s_2|^{p-p_2+2},\qquad \Big|s_2^{p-p_2+1}\Big(\frac{\partial\alpha_2}{\partial x_1}+\frac{\partial\alpha_2}{\partial\hat d_1}\lambda_1\Big)e_1\Big|\le c_1|e_1|^{p-p_2+2}+\hat a_2|s_2|^{p-p_2+2},\qquad(25)$$
where $c_2>0$, $a_2=\frac{p-p_2+1}{p-p_2+2}\Big[\frac{1}{(p-p_2+2)c_2}\Big]^{\frac{1}{p-p_2+1}}$, and $\hat a_2=\frac{p-p_2+1}{p-p_2+2}\Big[\frac{1}{(p-p_2+2)c_1}\Big]^{\frac{1}{p-p_2+1}}\Big|\frac{\partial\alpha_2}{\partial x_1}+\frac{\partial\alpha_2}{\partial\hat d_1}\lambda_1\Big|^{p-p_2+2}$.
A similar argument to (15)–(17) in Step 1 leads to
$$s_2^{p-p_2+1}\big(x_3^{p_2}-\alpha_3^{p_2}\big)\le2^{p_2-1}p_2|s_2|^{p-p_2+1}|s_3|^{p_2}+\big(2^{p_2-1}+1\big)p_2|s_2|^{p-p_2+1}|s_3||\alpha_3|^{p_2-1}\le s_2^{p+1}+s_3^{p+1}\beta_{21}+s_3^{\frac{p+1}{p_2}}\beta_{22},\qquad(26)$$
where $\beta_{21}=\frac{p_2^2\,2^{p_2-1}}{p+1}\Big[\frac{p+1}{(p-p_2+1)p_22^{p_2}}\Big]^{-\frac{p-p_2+1}{p_2}}$ and $\beta_{22}=\frac{(2^{p_2-1}+1)p_2^2}{p+1}\Big[\frac{p+1}{(p-p_2+1)p_2(2^{p_2}+2)}\Big]^{-\frac{p-p_2+1}{p_2}}|\alpha_3|^{\frac{(p+1)(p_2-1)}{p_2}}$.
By substituting (25) and (26) and the control law (24) into (23), it is rewritten as
$$\dot V_2\le-\sum_{j=1}^{2}k_js_j^{p-p_j+2}+s_3^{p+1}\beta_{21}+s_3^{\frac{p+1}{p_2}}\beta_{22}+\sum_{j=1}^{2}e_j^{p-p_j+1}\dot d_j-(\lambda_2-c_2)e_2^{p-p_2+2}-(\lambda_1-2c_1)e_1^{p-p_1+2}.\qquad(27)$$
STEP 3. Similarly to Step 2, consider $s_3=x_3-\alpha_3$ and (24). We have
$$\begin{aligned}\dot s_3=\dot x_3-\dot\alpha_3=\;&x_4^{p_3}+\phi_3(\bar x_3)+d_3-\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&-\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial x_j}\big[x_{j+1}^{p_j}+\phi_j(\bar x_j)+d_j\big]-\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial\hat d_j}\lambda_je_j.\end{aligned}\qquad(28)$$
Choose the following Lyapunov function:
$$V_3=V_2+\frac{s_3^{p-p_3+2}}{p-p_3+2}+\frac{e_3^{p-p_3+2}}{p-p_3+2}.$$
By combining (9), (27), and (28), the derivative of $V_3$ satisfies
$$\begin{aligned}\dot V_3=\;&\dot V_2+s_3^{p-p_3+1}\dot s_3+e_3^{p-p_3+1}\dot e_3\\
\le\;&-\sum_{j=1}^{2}k_js_j^{p-p_j+2}+s_3^{p+1}\beta_{21}+s_3^{\frac{p+1}{p_2}}\beta_{22}+\sum_{j=1}^{3}e_j^{p-p_j+1}\dot d_j\\
&+s_3^{p-p_3+1}\Big(x_4^{p_3}+\phi_3(\bar x_3)+d_3-\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial y_d^{(j)}}y_d^{(j+1)}-\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial x_j}\big[x_{j+1}^{p_j}+\phi_j(\bar x_j)+d_j\big]-\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial\hat d_j}\lambda_je_j\Big)\\
&-\lambda_3e_3^{p-p_3+2}-(\lambda_1-2c_1)e_1^{p-p_1+2}-(\lambda_2-c_2)e_2^{p-p_2+2}\\
\le\;&-\sum_{j=1}^{2}k_js_j^{p-p_j+2}-s_3^{p-p_3+1}\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial x_j}e_j+s_3^{p-p_3+1}\big(x_4^{p_3}-\alpha_4^{p_3}\big)\\
&+s_3^{p-p_3+1}\Big[\alpha_4^{p_3}+s_3^{p_3}\beta_{21}+s_3^{\bar p_3}\beta_{22}+\phi_3(\bar x_3)+e_3+\hat d_3-\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&\qquad-\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial x_j}\big(x_{j+1}^{p_j}+\phi_j(\bar x_j)+\hat d_j\big)-\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial\hat d_j}\lambda_je_j\Big]\\
&+\sum_{j=1}^{3}e_j^{p-p_j+1}\dot d_j-\lambda_3e_3^{p-p_3+2}-(\lambda_1-2c_1)e_1^{p-p_1+2}-(\lambda_2-c_2)e_2^{p-p_2+2},\end{aligned}$$
where $\bar p_3=\frac{p+1}{p_2}-(p-p_3+1)$ is a non-negative constant under the second condition of Assumption 1.
Now, $\alpha_4$ is designed as
$$\begin{aligned}\alpha_4=\Big[&-s_3^{p_3}-(a_3+\hat a_3+k_3)s_3-s_3^{p_3}\beta_{21}-s_3^{\bar p_3}\beta_{22}-\phi_3(\bar x_3)-\hat d_3+\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)+\sum_{j=0}^{2}\frac{\partial\alpha_3}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&+\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial x_j}\big(x_{j+1}^{p_j}+\phi_j(\bar x_j)+\hat d_j\big)+\sum_{j=1}^{2}\frac{\partial\alpha_3}{\partial\hat d_j}\lambda_je_j\Big]^{\frac{1}{p_3}},\end{aligned}$$
where $k_3>0$, and $a_3$ and $\hat a_3$ are designed below.
Meanwhile, by applying Lemma 5, it follows that
$$\big|e_3s_3^{p-p_3+1}\big|\le c_3|e_3|^{p-p_3+2}+a_3|s_3|^{p-p_3+2},\qquad \Big|s_3^{p-p_3+1}\sum_{j=1}^{2}\Big(\frac{\partial\alpha_3}{\partial x_j}+\frac{\partial\alpha_3}{\partial\hat d_j}\lambda_j\Big)e_j\Big|\le\sum_{j=1}^{2}c_j|e_j|^{p-p_3+2}+\hat a_3|s_3|^{p-p_3+2},$$
where
$$c_3>0,\qquad a_3=\frac{p-p_3+1}{p-p_3+2}\Big[\frac{1}{(p-p_3+2)c_3}\Big]^{\frac{1}{p-p_3+1}},\qquad \hat a_3=\sum_{j=1}^{2}\frac{p-p_3+1}{p-p_3+2}\Big[\frac{1}{(p-p_3+2)c_j}\Big]^{\frac{1}{p-p_3+1}}\Big|\frac{\partial\alpha_3}{\partial x_j}+\frac{\partial\alpha_3}{\partial\hat d_j}\lambda_j\Big|^{p-p_3+2}.$$
Similarly to the processing of (26) in Step 2,
$$s_3^{p-p_3+1}\big(x_4^{p_3}-\alpha_4^{p_3}\big)\le2^{p_3-1}p_3|s_3|^{p-p_3+1}|s_4|^{p_3}+\big(2^{p_3-1}+1\big)p_3|s_3|^{p-p_3+1}|s_4||\alpha_4|^{p_3-1}\le s_3^{p+1}+s_4^{p+1}\beta_{31}+s_4^{\frac{p+1}{p_3}}\beta_{32},$$
where $\beta_{31}=\frac{p_3^2\,2^{p_3-1}}{p+1}\Big[\frac{p+1}{(p-p_3+1)p_32^{p_3}}\Big]^{-\frac{p-p_3+1}{p_3}}$ and $\beta_{32}=\frac{(2^{p_3-1}+1)p_3^2}{p+1}\Big[\frac{p+1}{(p-p_3+1)p_3(2^{p_3}+2)}\Big]^{-\frac{p-p_3+1}{p_3}}|\alpha_4|^{\frac{(p+1)(p_3-1)}{p_3}}$.
By substituting (31) and (32) and the control law (30) into (29), it is rewritten as
$$\dot V_3\le-\sum_{j=1}^{3}k_js_j^{p-p_j+2}+s_4^{p+1}\beta_{31}+s_4^{\frac{p+1}{p_3}}\beta_{32}+\sum_{j=1}^{3}e_j^{p-p_j+1}\dot d_j-(\lambda_3-c_3)e_3^{p-p_3+2}-(\lambda_1-3c_1)e_1^{p-p_1+2}-(\lambda_2-2c_2)e_2^{p-p_2+2}.$$
STEP i. At step $i-1$ with $i=4,\dots,n-1$, we assume there exists a continuously differentiable function $V_{i-1}=\sum_{j=1}^{i-1}\frac{s_j^{p-p_j+2}}{p-p_j+2}+\sum_{j=1}^{i-1}\frac{e_j^{p-p_j+2}}{p-p_j+2}$ such that
$$\dot V_{i-1}\le-\sum_{j=1}^{i-1}k_js_j^{p-p_j+2}+s_i^{p+1}\beta_{i-1,1}+s_i^{\frac{p+1}{p_{i-1}}}\beta_{i-1,2}+\sum_{j=1}^{i-1}e_j^{p-p_j+1}\dot d_j-\sum_{j=1}^{i-1}\big(\lambda_j-(i-j)c_j\big)e_j^{p-p_j+2}.$$
It is obvious that, when $i=4$, (34) is exactly (33). In what follows, we give a strict proof that a bound of the same form also holds at the $i$th step.
For this purpose,
$$\begin{aligned}\dot s_i=\dot x_i-\dot\alpha_i=\;&x_{i+1}^{p_i}+\phi_i(\bar x_i)+d_i-\sum_{j=0}^{i-1}\frac{\partial\alpha_i}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{i-1}\frac{\partial\alpha_i}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&-\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial x_j}\big[x_{j+1}^{p_j}+\phi_j(\bar x_j)+d_j\big]-\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial\hat d_j}\lambda_je_j,\end{aligned}$$
and the Lyapunov function V i can be chosen as follows:
$$V_i=V_{i-1}+\frac{s_i^{p-p_i+2}}{p-p_i+2}+\frac{e_i^{p-p_i+2}}{p-p_i+2}.$$
In light of (9), (11), and (34), we obtain the following inequality:
$$\begin{aligned}\dot V_i\le\;&-\sum_{j=1}^{i-1}k_js_j^{p-p_j+2}-s_i^{p-p_i+1}\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial x_j}e_j+s_i^{p-p_i+1}\big(x_{i+1}^{p_i}-\alpha_{i+1}^{p_i}\big)\\
&+s_i^{p-p_i+1}\Big[\alpha_{i+1}^{p_i}+s_i^{p_i}\beta_{i-1,1}+s_i^{\bar p_i}\beta_{i-1,2}+\phi_i(\bar x_i)+e_i+\hat d_i-\sum_{j=0}^{i-1}\frac{\partial\alpha_i}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{i-1}\frac{\partial\alpha_i}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&\qquad-\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial x_j}\big(x_{j+1}^{p_j}+\phi_j(\bar x_j)+\hat d_j\big)-\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial\hat d_j}\lambda_je_j\Big]\\
&+\sum_{j=1}^{i}e_j^{p-p_j+1}\dot d_j-\sum_{j=1}^{i-1}\big(\lambda_j-(i-j)c_j\big)e_j^{p-p_j+2}-\lambda_ie_i^{p-p_i+2},\end{aligned}$$
where $\bar p_i=\frac{p+1}{p_{i-1}}-(p-p_i+1)$ is a non-negative constant under the second condition of Assumption 1.
Now, $\alpha_{i+1}$ is designed as
$$\begin{aligned}\alpha_{i+1}=\Big[&-s_i^{p_i}-(a_i+\hat a_i+k_i)s_i-s_i^{p_i}\beta_{i-1,1}-s_i^{\bar p_i}\beta_{i-1,2}-\phi_i(\bar x_i)-\hat d_i+\sum_{j=0}^{i-1}\frac{\partial\alpha_i}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)+\sum_{j=0}^{i-1}\frac{\partial\alpha_i}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&+\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial x_j}\big(x_{j+1}^{p_j}+\phi_j(\bar x_j)+\hat d_j\big)+\sum_{j=1}^{i-1}\frac{\partial\alpha_i}{\partial\hat d_j}\lambda_je_j\Big]^{\frac{1}{p_i}},\end{aligned}$$
where $k_i>0$, and $a_i$ and $\hat a_i$ are designed below.
Applying Lemma 5 again, it follows that
$$\big|e_is_i^{p-p_i+1}\big|\le c_i|e_i|^{p-p_i+2}+a_i|s_i|^{p-p_i+2},\qquad \Big|s_i^{p-p_i+1}\sum_{j=1}^{i-1}\Big(\frac{\partial\alpha_i}{\partial x_j}+\frac{\partial\alpha_i}{\partial\hat d_j}\lambda_j\Big)e_j\Big|\le\sum_{j=1}^{i-1}c_j|e_j|^{p-p_i+2}+\hat a_i|s_i|^{p-p_i+2},$$
where
$$c_i>0,\qquad a_i=\frac{p-p_i+1}{p-p_i+2}\Big[\frac{1}{(p-p_i+2)c_i}\Big]^{\frac{1}{p-p_i+1}},\qquad \hat a_i=\sum_{j=1}^{i-1}\frac{p-p_i+1}{p-p_i+2}\Big[\frac{1}{(p-p_i+2)c_j}\Big]^{\frac{1}{p-p_i+1}}\Big|\frac{\partial\alpha_i}{\partial x_j}+\frac{\partial\alpha_i}{\partial\hat d_j}\lambda_j\Big|^{p-p_i+2}.$$
Furthermore,
$$s_i^{p-p_i+1}\big(x_{i+1}^{p_i}-\alpha_{i+1}^{p_i}\big)\le2^{p_i-1}p_i|s_i|^{p-p_i+1}|s_{i+1}|^{p_i}+\big(2^{p_i-1}+1\big)p_i|s_i|^{p-p_i+1}|s_{i+1}||\alpha_{i+1}|^{p_i-1}\le s_i^{p+1}+s_{i+1}^{p+1}\beta_{i1}+s_{i+1}^{\frac{p+1}{p_i}}\beta_{i2},$$
where $\beta_{i1}=\frac{p_i^2\,2^{p_i-1}}{p+1}\Big[\frac{p+1}{(p-p_i+1)p_i2^{p_i}}\Big]^{-\frac{p-p_i+1}{p_i}}$ and $\beta_{i2}=\frac{(2^{p_i-1}+1)p_i^2}{p+1}\Big[\frac{p+1}{(p-p_i+1)p_i(2^{p_i}+2)}\Big]^{-\frac{p-p_i+1}{p_i}}|\alpha_{i+1}|^{\frac{(p+1)(p_i-1)}{p_i}}$.
By substituting (38) and (39) and the control law (37) into (36), it is rewritten as
$$\dot V_i\le-\sum_{j=1}^{i}k_js_j^{p-p_j+2}+s_{i+1}^{p+1}\beta_{i1}+s_{i+1}^{\frac{p+1}{p_i}}\beta_{i2}+\sum_{j=1}^{i}e_j^{p-p_j+1}\dot d_j-\sum_{j=1}^{i}\big(\lambda_j-(i-j+1)c_j\big)e_j^{p-p_j+2}.$$
STEP n. The actual controller $u(t)$ is obtained at the last step. $\dot s_n$ can be expressed as
$$\begin{aligned}\dot s_n=\dot x_n-\dot\alpha_n=\;&x_{n+1}^{p_n}+\phi_n(\bar x_n)+d_n-\sum_{j=0}^{n-1}\frac{\partial\alpha_n}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)-\sum_{j=0}^{n-1}\frac{\partial\alpha_n}{\partial y_d^{(j)}}y_d^{(j+1)}\\
&-\sum_{j=1}^{n-1}\frac{\partial\alpha_n}{\partial x_j}\big[x_{j+1}^{p_j}+\phi_j(\bar x_j)+d_j\big]-\sum_{j=1}^{n-1}\frac{\partial\alpha_n}{\partial\hat d_j}\lambda_je_j,\end{aligned}$$
and the Lyapunov function V n also can be chosen as
$$V_n=V_{n-1}+\frac{s_n^{p-p_n+2}}{p-p_n+2}+\frac{e_n^{p-p_n+2}}{p-p_n+2}.$$
Through the same process above, we select
$$\begin{aligned}u(t)=x_{n+1}=\alpha_{n+1}=\Big[&-(a_n+\hat a_n+k_n)s_n-s_n^{p_n}\beta_{n-1,1}-s_n^{\bar p_n}\beta_{n-1,2}-\phi_n(\bar x_n)-\hat d_n+\sum_{j=0}^{n-1}\frac{\partial\alpha_n}{\partial\nu^{(j)}(t)}\nu^{(j+1)}(t)\\
&+\sum_{j=0}^{n-1}\frac{\partial\alpha_n}{\partial y_d^{(j)}}y_d^{(j+1)}+\sum_{j=1}^{n-1}\frac{\partial\alpha_n}{\partial x_j}\big(x_{j+1}^{p_j}+\phi_j(\bar x_j)+\hat d_j\big)+\sum_{j=1}^{n-1}\frac{\partial\alpha_n}{\partial\hat d_j}\lambda_je_j\Big]^{\frac{1}{p_n}},\end{aligned}\qquad(41)$$
and the derivative of V n can be found straightforwardly:
$$\dot V_n\le-\sum_{j=1}^{n}k_js_j^{p-p_j+2}+\sum_{j=1}^{n}e_j^{p-p_j+1}\dot d_j-\sum_{j=1}^{n}\big(\lambda_j-(n-j+1)c_j\big)e_j^{p-p_j+2}.\qquad(42)$$
Remark 5. 
In the whole process above, in order to counteract the cross terms arising from the coupling among the disturbances, the system states, and the compensation errors, two sets of auxiliary terms, $a_i$ and $\hat a_i$, are constructed and introduced into both the virtual control laws and the actual control input of the back-stepping design.

3.2. Stability Analysis

So far, the design of the back-stepping control protocol has been completed. The two main conclusions are as follows.
Theorem 1. 
Consider system (1) together with the disturbance observer error system (9), the error coordinates (11), and the controller (41), under Assumptions 1 and 2. Then the closed-loop system is input-to-state stable (ISS).
Proof of Theorem 1. 
Choose the Lyapunov function as
$$V_n=\sum_{j=1}^{n}\frac{s_j^{p-p_j+2}}{p-p_j+2}+\sum_{j=1}^{n}\frac{e_j^{p-p_j+2}}{p-p_j+2};$$
according to former work (42), one has
$$\dot V_n\le-\sum_{j=1}^{n}k_js_j^{p-p_j+2}+\sum_{j=1}^{n}e_j^{p-p_j+1}\dot d_j-\sum_{j=1}^{n}\big(\lambda_j-(n-j+1)c_j\big)e_j^{p-p_j+2}.\qquad(43)$$
In order to facilitate the following theoretical analysis, we select a constant $\sigma$ with $0<\sigma<1$ and let $\lambda_j=\mu_j+(n-j+1)c_j$, $\mu_j>0$, and $\hat\mu=\min\{\mu_1,\dots,\mu_n\}$. Then, (43) is rewritten as
$$\begin{aligned}\dot V_n&\le-\sum_{j=1}^{n}k_js_j^{p-p_j+2}+\sum_{j=1}^{n}e_j^{p-p_j+1}\dot d_j-\sum_{j=1}^{n}\mu_je_j^{p-p_j+2}\\
&\le-\sum_{j=1}^{n}k_js_j^{p-p_j+2}+\|\dot d\|\,\|e\|^{p-p_j+1}-\hat\mu\|e\|^{p-p_j+2}\\
&=-\sum_{j=1}^{n}k_js_j^{p-p_j+2}+\|\dot d\|\,\|e\|^{p-p_j+1}-\sigma\hat\mu\|e\|^{p-p_j+2}-(1-\sigma)\hat\mu\|e\|^{p-p_j+2},\end{aligned}\qquad(44)$$
where $e=(e_1,\dots,e_n)^T$ and $\dot d=(\dot d_1,\dots,\dot d_n)^T$.
Consider (44). It is plain that, when $\|e\|\ge\frac{\|\dot d\|}{\hat\mu\sigma}$, one has
$$\dot V_n\le-\sum_{j=1}^{n}k_js_j^{p-p_j+2}-(1-\sigma)\hat\mu\|e\|^{p-p_j+2}\le-\sum_{j=1}^{n}k_js_j^{p-p_j+2}-(1-\sigma)\hat\mu\|e\|^{2}.$$
Therefore, according to Lemma 1, regarding $e$ and $\dot d$ as the state and input, respectively, the closed-loop system is input-to-state stable. Furthermore, it follows that $s_i$ and $e_i$ are uniformly ultimately bounded [36]. □

3.3. Prescribed Performance and Convergence Analysis

Next, we discuss the asymptotical output tracking of system (1) with disturbances and the prescribed performance control.
Theorem 2. 
Under Assumptions 1 and 2, consider the nonlinear system (1) with disturbance observer (7) and composite controller (41). Then, the following three control objectives are achieved:
(i) 
the disturbance estimation errors $e_i$ asymptotically converge to zero;
(ii) 
the tracking error $E(t)$ satisfies $\lim_{t\to\infty}E(t)=0$;
(iii) 
the prescribed performance (3) is guaranteed.
Proof of Theorem 2. 
According to Theorem 1, regarding $\dot d_i(t)$ as the input to system (1), and with the help of the second condition of Assumption 2 and Lemma 2, the states satisfy
  • $\lim_{t\to\infty}s_i(t)=0$ and $\lim_{t\to\infty}e_i(t)=0$, which implies that
$$\lim_{t\to\infty}s_1(t)=\lim_{t\to\infty}\bigg[\frac12\ln\frac{T(\chi(t))+\underline\delta}{\bar\delta-T(\chi(t))}-\frac12\ln\frac{\underline\delta}{\bar\delta}\bigg]=0.$$
Then, we have $\lim_{t\to\infty}T(\chi(t))=0$; therefore,
$$\lim_{t\to\infty}E(t)=\lim_{t\to\infty}\nu(t)T(\chi(t))=0.$$
In addition, since $s_1(t)$ is bounded, $\chi(t)$ is bounded. According to the properties of the transformation function $T(\chi(t))$ in Remark 2, $-\underline\delta<T(\chi(t))<\bar\delta$, which means that $-\underline\delta\,\nu(t)<E(t)<\bar\delta\,\nu(t)$. Thus, the tracking error $E(t)$ achieves the prescribed performance (3). □
Remark 6. 
The control protocol proposed in this paper achieves a global result for any initial conditions and can satisfy arbitrary performance constraints on the convergence speed, the steady-state error, and the overshoot, which vary across practical engineering applications.
Remark 7. 
In the proposed control protocol, the parameters $\underline\delta$, $\nu(0)$, and $\bar\delta$ should be selected properly to guarantee the initial condition of the prescribed performance, $-\underline\delta\,\nu(0)<E(0)<\bar\delta\,\nu(0)$. For instance, a large $\chi(t)$ will drive the tracking error $E(t)$ close to its boundary, which causes a large control input $u(t)$. This situation may be too demanding for the hardware limitations, and reselecting the parameters $\underline\delta$, $\nu(0)$, and $\bar\delta$ may be a practicable solution.

4. Simulation

In order to show the practical effectiveness of the design protocol proposed in this paper, we applied it to the following second-order nonlinear system as an application and illustration:
$$\dot x_1(t)=x_2^{p_1}(t)+d_1(t),\qquad \dot x_2(t)=u^{p_2}(t)-\frac{49}{10}\sin x_1(t)+d_2(t),\qquad y(t)=x_1(t),\qquad(45)$$
where $p_1=1$, $p_2=3$, and $p=3$. Our objective is to track the expected signal $y_d(t)$.
In this case, consider $y_d(t)=0.3\sin(t)+0.2\cos(0.5t)$, and the disturbances are given as
$$d_1(t)=\begin{cases}0,&0\le t<10,\\ \cos(t),&10\le t<25,\\ 1.8,&t\ge25,\end{cases}\qquad d_2(t)=\begin{cases}0,&0\le t<10,\\ 0.2\sin(t),&10\le t<25,\\ 1,&t\ge25.\end{cases}$$
In addition, let the initial conditions be $x_1(0)=0$ and $x_2(0)=0$. Additionally, we selected the parameters $t_0=1$, $\nu_0=2$, $\nu_\infty=0.1$, $\rho_\infty=0.1$, $\underline\delta=1$, and $\bar\delta=2$ for the prescribed performance function, and $k_1=1$, $k_2=1$, $c_1=2$, $c_2=2$, $\lambda_1=5$, and $\lambda_2=22$ for the composite controller.
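For readers who wish to reproduce the qualitative behavior, the following sketch simulates example (45) together with the disturbance observer; it is our own simplified illustration, not the paper's composite controller (41): the prescribed performance transformation and the power-integrator terms are omitted, the virtual-control derivative is approximated numerically, and all gains are placeholders.

```python
import numpy as np

dt, T = 1e-3, 40.0
k1, k2, lam1, lam2 = 5.0, 5.0, 20.0, 20.0            # placeholder gains

def yd(t):     return 0.3 * np.sin(t) + 0.2 * np.cos(0.5 * t)
def yd_dot(t): return 0.3 * np.cos(t) - 0.1 * np.sin(0.5 * t)
def d1(t):     return 0.0 if t < 10 else (np.cos(t) if t < 25 else 1.8)
def d2(t):     return 0.0 if t < 10 else (0.2 * np.sin(t) if t < 25 else 1.0)

x1 = x2 = 0.0
z1 = z2 = 0.0            # DOB internal states (called p_i(t) in the paper)
alpha2_prev = None
for i in range(int(T / dt)):
    t = i * dt
    dhat1, dhat2 = lam1 * (x1 - z1), lam2 * (x2 - z2)
    # simplified back-stepping with DOB compensation (not the composite law (41))
    e1 = x1 - yd(t)
    alpha2 = yd_dot(t) - k1 * e1 - dhat1
    alpha2_dot = 0.0 if alpha2_prev is None else (alpha2 - alpha2_prev) / dt
    alpha2_prev = alpha2
    e2 = x2 - alpha2
    # p_2 = 3, so solve u^3 = (...) with a cube root
    u = np.cbrt(alpha2_dot + 4.9 * np.sin(x1) - k2 * e2 - dhat2 - e1)
    # observer update (7) and plant update (45), forward Euler
    z1 += dt * (x2 + dhat1)
    z2 += dt * (u**3 - 4.9 * np.sin(x1) + dhat2)
    x1 += dt * (x2 + d1(t))
    x2 += dt * (u**3 - 4.9 * np.sin(x1) + d2(t))
```

Comparing $x_1(t)$ with $y_d(t)$ and $\hat d_i(t)$ with $d_i(t)$ illustrates the tracking and estimation behavior; the transient guarantees shown in Figure 1 additionally rely on the prescribed performance terms omitted in this sketch.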
Figure 1 shows the simulation results. Firstly, as Figure 1a shows, the prescribed performance of the tracking error $E(t)$ is confirmed, and the disturbances are rejected well by the controller, which shows the effectiveness of the proposed control protocol. Secondly, Figure 1b presents the curves of the output $y(t)$, the given signal $y_d(t)$, and the state $x_2(t)$, which indicates that $x_2$ is bounded and $y(t)$ fits $y_d(t)$ completely in less than 5 s. Thirdly, the curve of the input $u(t)$ can be seen in Figure 1c, which is also bounded. It is noted that when the disturbances fluctuate strongly from $t=10$ to $t=25$ and jump at $t=25$, $u(t)$ also changes considerably. Lastly, Figure 1d confirms the effectiveness of the disturbance observer by showing that $d_1(t)$ and $d_2(t)$ are well estimated. Therefore, it is obvious that the proposed composite controller achieves all the control objectives, which demonstrates its good tracking control and anti-disturbance performance.
Remark 8. 
It is worth noting that the first-order case (i.e., $p_1=p_2=1$) of the above example corresponds to the single-link robot dynamic equation, an engineering application. The single-link robot dynamic equation proposed by Ho et al. [39] can be described as
$$M\ddot q+\frac12mgL\sin q=u,\qquad y=q,\qquad(46)$$
where $m$, $L$, and $q$ are the mass, the length, and the angle of the link; $M=1$ and $g=9.8\,\mathrm{m/s^2}$ denote the moment of inertia and the gravitational acceleration, respectively; and $u$ is the control torque. Let $q$ and $\dot q$ be $x_1(t)$ and $x_2(t)$, and let $d_1(t)$ and $d_2(t)$ be unknown external disturbances. Then (46) can be written as
$$\dot x_1(t)=x_2(t)+d_1(t),\qquad \dot x_2(t)=u(t)-\frac{49}{10}mL\sin x_1(t)+d_2(t),\qquad y(t)=x_1(t).\qquad(47)$$
Letting $m=L=1$, (47) becomes the first-order case of (45):
$$\dot x_1(t)=x_2(t)+d_1(t),\qquad \dot x_2(t)=u(t)-\frac{49}{10}\sin x_1(t)+d_2(t),\qquad y(t)=x_1(t).\qquad(48)$$

5. Conclusions

In this article, the prescribed performance tracking control and anti-disturbance control problems have been solved for a class of high-order strict-feedback systems with external disturbances. With the help of the PPC method, the DOB technique, the back-stepping method, and the technique of adding a power integrator, a novel composite controller was developed to guarantee that all states in the closed-loop system are stable and that the tracking error maintains the prescribed performance throughout the evolution. In addition, the output tracking error converges to zero when the disturbances satisfy a weak boundedness assumption. Finally, a numerical simulation was presented to show the effectiveness of the theoretical results.

Author Contributions

Conceptualization, X.T. and H.J.; methodology, X.T.; software, X.T.; validation, X.T. and H.J.; writing—original draft preparation, X.T.; writing—review and editing, X.T.; supervision, H.J.; project administration, H.J.; funding acquisition, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported jointly by National Natural Science Foundation of China (62163035), by the Key Project of Natural Science Foundation of Xinjiang (2021D01D10), by Xinjiang Key Laboratory of Applied Mathematics (XJDX1401), and by the Special Project for Local Science and Technology Development Guided by the Central Government (ZYYD2022A05).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Kim, B.; Calise, A. Nonlinear flight control using neural networks. J. Guid. Control Dynam. 1997, 20, 26–33. [Google Scholar] [CrossRef]
  2. Nijmeijer, H.; Van der Schaft, A. Nonlinear Dynamical Control Systems, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  3. Liu, C.; Liu, X.; Wang, H.; Lu, S.; Zhou, Y. Adaptive control and application for nonlinear systems with input nonlinearities and unknown virtual control coefficients. IEEE Trans. Cybern. 2022, 52, 8804–8817. [Google Scholar] [CrossRef] [PubMed]
  4. Oishi, Y.; Sakamoto, N. Optimal Sampled-Data Control of a Nonlinear System. arXiv 2021, arXiv:2112.14507. [Google Scholar]
  5. Kawan, C.; Mironchenko, A.; Swikir, A.; Noroozi, N.; Zamani, M. A Lyapunov-based small-gain theorem for infinite networks. IEEE Trans. Autom. Control 2021, 66, 5830–5844. [Google Scholar] [CrossRef]
  6. Sontag, E. Stabilization implies coprime factorization. IEEE Trans. Autom. Control 1989, 34, 435–443. [Google Scholar] [CrossRef] [Green Version]
  7. Yu, P.; Qi, D.; Sun, Y.; Wan, F. Stability analysis of impulsive stochastic delayed Cohen-Grossberg neural networks driven by Levy noise. Appl. Math. Comput. 2022, 434, 127444. [Google Scholar] [CrossRef]
  8. He, D.; Huang, H. Input-to-state stability of efficient robust H∞ MPC scheme for nonlinear systems. Inf. Sci. 2015, 292, 111–124. [Google Scholar] [CrossRef]
  9. Lin, Z.; Liu, Z.; Zhang, Y.; Philip Chen, C. Adaptive neural inverse optimal tracking control for uncertain multi-agent systems. Inf. Sci. 2022, 584, 31–49. [Google Scholar] [CrossRef]
  10. Pu, Z.; Rao, R. LMI-based criterion on stochastic ISS property of delayed high-order neural networks with explicit gain function and simply event-triggered mechanism. Neurocomputing 2020, 377, 57–63. [Google Scholar] [CrossRef]
  11. Nekhoroshikh, A.; Efimov, D.; Fridman, E.; Perruquetti, W.; Furtat, I.; Polyakov, A. Practical fixed-time ISS of neutral time-delay systems with application to stabilization by using delays. Automatica 2022, 143, 110455. [Google Scholar] [CrossRef]
  12. Mancilla-Aguilar, J.; Haimovich, H. (Integral-)ISS of switched and time-varying impulsive systems based on global state weak linearization. IEEE Trans. Autom. Control 2021, 67, 6918–6925. [Google Scholar] [CrossRef]
  13. Gao, L.; Liu, Z.; Wang, S.; Qu, M.; Zhang, M. Input-to-state stability for discrete hybrid time-delay systems with admissible edge-dependent average dwell time. J. Franklin Inst. 2021. [Google Scholar] [CrossRef]
  14. Gong, Y.; Guo, Y.; Ma, G.; Ran, G.; Li, D. Predefined-time tracking control for high-order nonlinear systems with control saturation. Int. J. Robust Nonlinear Control 2022, 32, 6218–6235. [Google Scholar] [CrossRef]
  15. Zhang, X.; Wang, Y.; Cheng, D. Output tracking of Boolean control networks. IEEE Trans. Autom. Control 2019, 65, 2730–2735. [Google Scholar] [CrossRef]
  16. Wu, C.; Pan, W.; Sun, G.; Liu, J.; Wu, L. Learning tracking control for cyber-physical systems. IEEE Internet Things J. 2021, 8, 9151–9163. [Google Scholar] [CrossRef]
  17. Yu, Z.; Yu, S.; Jiang, H.; Hu, C. Distributed consensus for multi-agent systems via adaptive sliding mode control. Int. J. Robust Nonlinear Control 2021, 31, 7125–7151. [Google Scholar] [CrossRef]
  18. Zhao, Y.; Liu, Y.; Ma, D. Output regulation for switched systems with multiple disturbances. IEEE Trans. Circuits Syst. Regul. Pap. 2020, 67, 5326–5335. [Google Scholar] [CrossRef]
  19. Liu, S.; Feng, J.; Wang, Q.; Song, W. Adaptive consensus control for a class of nonlinear multi-agent systems with unknown time delays and external disturbances. Trans. Inst. Meas. Control 2022, 44, 2063–2075. [Google Scholar] [CrossRef]
  20. Huang, J.; Chen, Z. A general framework for tackling the output regulation problem. IEEE Trans. Autom. Control 2004, 49, 2203–2218. [Google Scholar] [CrossRef]
  21. Back, J.; Shim, H. Adding robustness to nominal output-feedback controllers for uncertain nonlinear systems: A nonlinear version of disturbance observer. Automatica 2008, 44, 2528–2537. [Google Scholar] [CrossRef]
  22. Wang, W.; Guo, P.; Hu, C.; Zhu, L. High-performance control of fast tool servos with robust disturbance observer and modified H∞ control. Mechatronics 2022, 84, 102781. [Google Scholar]
  23. Santina, C.; Turby, R.; Rus, D. Data-driven disturbance observers for estimating external forces on soft robots. IEEE Robot. Autom. Lett. 2020, 5, 5717–5724. [Google Scholar] [CrossRef]
  24. Zhang, J.; Chen, D.; Shen, G.; Sun, Z.; Xia, Y. Disturbance observer based adaptive fuzzy sliding mode control: A dynamic sliding surface approach. Automatica 2021, 129, 109606. [Google Scholar] [CrossRef]
  25. Zhang, W.; Wei, W. Disturbance-observer-based finite-time adaptive fuzzy control for non-triangular switched nonlinear systems with input saturation. Inf. Sci. 2021, 561, 152–167. [Google Scholar] [CrossRef]
  26. Krstic, M.; Kokotovic, P.; Kanellakopoulos, I. Nonlinear and Adaptive Control Design; John Wiley & Sons, Inc.: New York, NY, USA, 1995. [Google Scholar]
  27. Wang, J.; Rong, J.; Lu, L. Reduced-order extended state observer based event-triggered sliding mode control for DC-DC buck converter system with parameter perturbation. Asian J. Control 2020, 23, 1591–1601. [Google Scholar] [CrossRef]
  28. Bechlioulis, C.; Rovithakis, G. Prescribed performance adaptive control of SISO feedback linearizable systems with disturbances. In Proceedings of the 2008 16th Mediterranean Conference on Control and Automation, Ajaccio, France, 25–27 June 2008. [Google Scholar]
  29. Bechlioulis, C.; Rovithakis, G. Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance. IEEE Trans. Autom. Control 2008, 53, 2090–2099. [Google Scholar] [CrossRef]
  30. Fu, D.; Yin, H.; Huang, J. Controlling an uncertain mobile robot with prescribed performance. Nonlinear Dyn. 2021, 5, 2347–2362. [Google Scholar] [CrossRef]
  31. Bai, W.; Wang, H. Robust adaptive fault-tolerant tracking control for a class of high-order nonlinear system with finite-time prescribed performance. Int. J. Robust Nonlinear Control 2020, 30, 4708–4725. [Google Scholar] [CrossRef]
  32. Chen, L.; Yang, H. Adaptive neural prescribed performance output feedback control of pure feedback nonlinear systems using disturbance observer. Int. J. Adapt. Control 2020, 34, 520–542. [Google Scholar] [CrossRef]
  33. Huang, Y.; Lin, S.; Liu, X. H∞ synchronization and robust H∞ synchronization of coupled neural networks with non-identical nodes. Neural Process. Lett. 2021, 53, 3467–3496. [Google Scholar] [CrossRef]
  34. Gao, F.; Chen, W. Disturbance rejection in singular time-delay systems with external disturbances. Int. J. Control Autom. 2022, 20, 1841–1848. [Google Scholar] [CrossRef]
  35. Chen, F.; Dimarogonas, D. Leader-follower formation control with prescribed performance guarantees. IEEE Trans. Control Netw. 2020, 8, 450–461. [Google Scholar] [CrossRef]
  36. Vidyasagar, M. Nonlinear Systems Analysis; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2002. [Google Scholar]
  37. Yang, B.; Lin, W. Homogeneous observers, iterative design and global stabilization of high-order nonlinear systems by smooth output feedback. IEEE Trans. Autom. Control 2004, 49, 1069–1080. [Google Scholar] [CrossRef]
  38. Qian, C.; Lin, W. Non-lipschitz continuous stabilizers for nonlinear systems with uncontrollable unstable linearization. Syst. Control Lett. 2001, 42, 185–200. [Google Scholar] [CrossRef]
  39. Ho, H.; Wong, Y.; Rad, A. Adaptive fuzzy approach for a class of uncertain nonlinear systems in strict-feedback form. ISA Trans. 2008, 47, 286–299. [Google Scholar] [CrossRef]
Figure 1. Response curves of system (45).

