Article

Backstepping-Based Finite-Horizon Optimization for Pitching Attitude Control of Aircraft

by
Ang Li
1,*,
Yaohua Shen
2 and
Bin Du
3
1
Shenyang Aircraft Design and Research Institute, Yangzhou Collaborative Innovation Research Institute Co., Ltd., Yangzhou 225006, China
2
College of Automation, Jiangsu University of Science and Technology, Zhenjiang 212100, China
3
College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
*
Author to whom correspondence should be addressed.
Aerospace 2025, 12(8), 653; https://doi.org/10.3390/aerospace12080653
Submission received: 2 April 2025 / Revised: 27 June 2025 / Accepted: 4 July 2025 / Published: 23 July 2025
(This article belongs to the Section Aeronautics)

Abstract

In this paper, the problem of finite-horizon optimization for the pitching attitude control of aircraft is posed with system uncertainties, external disturbances, and input constraints. First, a neural network (NN) and a nonlinear disturbance observer (NDO) are employed to estimate the system uncertainties and external disturbances. Taking input constraints into account, an auxiliary system is designed to compensate for the constrained input. Subsequently, backstepping control containing the NN and NDO is used to ensure the stability of the system and suppress the adverse effects caused by the system uncertainties and external disturbances. In order to avoid the differentiation operations in the process of backstepping, a dynamic surface control (DSC) technique is utilized. Simultaneously, the estimates of the NN and NDO are applied to derive the backstepping control law. For the purpose of achieving finite-horizon optimization for pitching attitude control, an adaptive method termed adaptive dynamic programming (ADP) with a single critic NN is applied to obtain the optimal control. Time-varying feature functions are applied to construct the critic NN in order to approximate the value function in the Hamilton–Jacobi–Bellman (HJB) equation. Furthermore, a supplementary term is added to the weight update law to minimize the terminal constraint error. Lyapunov stability theory is used to prove that the signals in the control system are uniformly ultimately bounded (UUB). Finally, simulation results illustrate the effectiveness of the proposed finite-horizon optimal attitude control method.

1. Introduction

In flight control, the problem of finite-horizon optimization for pitching attitude tracking control of aircraft can be treated as making the pitching attitude track the command signal while achieving a desired finite-horizon optimal performance index. To resolve this problem, both the tracking control and the finite-horizon optimization should be taken into consideration, which makes the overall problem significantly more difficult.
In order to achieve attitude tracking for aircraft, much research has been conducted. In [1], an adaptive second-order sliding mode control method was presented in order to improve the performance in the presence of external disturbances. In [2], a gain-scheduling control method was proposed for multi-input multi-output linear parameter-varying systems in order to obtain a satisfactory performance in the bank-to-turn control of aircraft. In [3], nonlinear dynamic inversion technology was proposed for the control design of a supermaneuverable aircraft. In [4], backstepping control was employed to execute the attitude tracking control of a mini unmanned aerial vehicle.
Among the control methods above, the backstepping control scheme is widely used owing to several advantages. First, a virtual control is designed separately for each subsystem in the process of backstepping control, which reduces the complexity of high-order system control. Second, backstepping control can be combined with other control methods, such as sliding mode control, NNs, adaptive control, and disturbance observers, to improve the control performance. In [5], a robust backstepping control scheme combining sliding mode control and a neural network (NN) was proposed to achieve the reentry attitude tracking control of a near-space hypersonic vehicle in the presence of parameter variations and external disturbances. In [6], a deep convolutional NN-based backstepping method was used to identify system uncertainties and hidden states in attitude control in order to enhance robustness. In [7], an auxiliary system-based backstepping control was constructed for an aircraft subject to the input saturation problem caused by wing rock. In [8], a finite-time convergence backstepping control scheme was designed, in which a finite-time observer and a finite-time auxiliary system were used to suppress the effects of unsteady aerodynamic disturbances and to compensate for the effect of input saturation, respectively. However, the aforementioned attitude control methods do not take into account optimal control that meets a desired performance index. In fact, the optimal control problem is intractable, especially for nonlinear systems. Hence, there is relatively little research on the attitude control of aircraft in an optimal way, which is a nonlinear optimization problem in nature, not to mention finite-horizon optimization for attitude tracking control. Thus, how to control the aircraft attitude in an optimal way should be further studied.
Quadratic optimal control is an optimal control method applied earlier in flight control. A given quadratic index is used to control the system with a desired optimal performance. In [9], a nonlinear system was divided into two parts of a linear nominal system and compound disturbances. Then, a linear quadratic regulator was designed to control the linear nominal system, while a robust control was derived to compensate for the effects caused by compound disturbances. In order to cope with the problem of recovering open-loop singular values in the quadratic optimal control, the LQG/LTR technique was applied for a multivariable vertical short take-off and landing aircraft linear system in [10]. However, the above control methods can only be applied to linear systems. For a nonlinear system, Hamilton–Jacobi–Bellman (HJB) equations without analytical solutions need to be solved, which makes it intractable to execute optimal control.
To cope with the problem of solving HJB equations for a nonlinear system, some numerical methods were applied to approximate the solution. In [11], a dynamic programming algorithm was presented, which is supposed to be solved in an off-line manner. In [12], a recursive optimization approach was proposed for a nonlinear system. In [13], a state-dependent Riccati equation (SDRE) method, which used a parameterization technique to convert the nonlinear system into a linear structure with state-dependent coefficients, was proposed to deal with the problem of nonlinear optimization. Nevertheless, a heavy computational burden is the main obstacle to applying the above three methods to nonlinear optimization. Inspired by the dynamic programming algorithm, the ADP algorithm was proposed in [14]. Compared with the dynamic programming algorithm, a critic NN was constructed to approximate the value function in order to solve the HJB equation forward-in-time in the ADP algorithm. Thus, the heavy computational burden was avoided, and on-line optimization was achieved.
The ADP algorithm, characterized by strong abilities of self-learning and adaptivity, has received significantly increased attention and has become an important intelligent optimal control method for nonlinear systems [15]. Due to its advantage of a low calculation cost, ADP has been applied in flight control. In [16], an adaptive critic design (ACD)-based optimal control algorithm was proposed. Under the premise of ensuring system stability, the ACD algorithm was utilized to improve the control performance of the system. In [17], a constrained ADP approach and linear parameter-varying technique were employed to guarantee the closed-loop stability and excellent control performance of the flight with aerodynamic parameter uncertainties and actuator failures. In [18], an incremental ADP algorithm was proposed to control the attitude tracking of spacecraft. In [19], an integral sliding-mode control based adaptive actor–critic algorithm was developed to guarantee the optimal control for sliding-mode dynamics online. As discussed above, the backstepping control method is favored by researchers due to its advantages. Because of its feature of easy combination with other control methods, the backstepping-based ADP scheme has been applied in many works. In [20], a backstepping-based ADP algorithm was developed to solve the problem of missile-target interception with state and input constraints. In the scheme of backstepping, a barrier Lyapunov function was used in the virtual controller design process for each subsystem to guarantee the state constraints, and an auxiliary system was designed to compensate for the constrained input. In [21], a backstepping-based ADP algorithm with zero-sum differential game method was applied to the zero-sum game problem for a missile and target. 
The zero-sum differential game technique was applied in the scheme of ADP algorithm in order to control the missile and target in an optimal way, and a critic network was constructed to approximate the value function in Hamilton–Jacobi–Isaacs (HJI) in order to achieve optimization online. In [22], an NN-based optimal control scheme was proposed for the near-space vehicle attitude tracking control. In the scheme, the NN and NDO were designed in a backstepping scheme to approximate the system uncertainties and external disturbances, while the critic network was constructed to approximate the value function in the HJB equation. However, it should be noted that the developments of the ADP algorithm above mainly address only the problem of infinite-horizon optimization. In fact, it is required to control in a finite-horizon optimal way for many systems, especially for flight systems.
Compared with infinite-horizon optimal control, finite-horizon optimal control is considered to be more challenging. First, the value function of the finite-horizon optimal control system is time-to-go-dependent, which leads to a time-varying associated HJB equation; hence, it is more difficult to solve the HJB equation. Second, the terminal constraint should be taken into account for finite-horizon optimization [23]. For the purpose of addressing the issues above, some research has been conducted. In [24], time-dependent weights and state-dependent feature functions were incorporated to construct an NN in order to approximate the time-to-go-dependent value function, and the least-squares-based gradient descent method was utilized to update the weights off-line. In contrast, an NN consisting of constant weights and time-state-dependent feature functions was designed in [25] to achieve the approximation of the value function in the HJB equation online. Nevertheless, the constrained input and system uncertainties were not taken into account. Considering input constraints, a non-quadratic function was utilized to eliminate the input constraints in [26]. Regarding system uncertainties, an online NN identifier was designed to approximate system uncertainties, and an actor–critic algorithm was introduced to solve the HJB equation to guarantee that the system was controlled with the optimal index in finite horizon in [27]. Unfortunately, most of this research considered discrete-time systems. Furthermore, the constrained input, system uncertainties, and external disturbances have not been considered together in finite-horizon optimal control, which limits its application in flight control.
In order to address the problem of finite-horizon optimization for pitching attitude tracking with system uncertainties, external disturbances, and input constraints, a novel backstepping-based finite-horizon optimization scheme is developed in this work. The backstepping scheme, in which an NN and an NDO are employed to estimate the system uncertainties and external disturbances and an auxiliary system is designed to compensate for the constrained input, is introduced to ensure the stability of the system. The ADP algorithm with a critic NN that consists of constant weights and time-state-dependent feature functions is employed to obtain the finite-horizon optimal control. A novel updating law of the critic NN weights is derived to solve the HJB equation and minimize the terminal constraint error. Furthermore, the Lyapunov stability method is applied to prove that the signals in the control system are UUB. Finally, simulation results illustrate the effectiveness of the proposed control scheme.
The main contributions of this paper include the following:
(1)
A backstepping-based ADP scheme is used to achieve finite-horizon optimal control. In the backstepping control scheme, NN is applied to approximate system uncertainties, while NDO is employed to estimate external disturbances. The ADP is used to control the nominal system in a finite-horizon optimal manner. Due to the integration of the backstepping method and the advantages of ADP, the backstepping-based finite-horizon optimization ADP scheme is promising for pitching attitude tracking control.
(2)
A novel updating law of the critic NN weights is derived in order to satisfy the terminal constraints, relax the requirement of an initial admissible control, and guarantee the stability of the system.
The rest of the paper is organized as follows. Section 2 formally states the preliminaries of the research object of this paper. The designs for backstepping control and finite-horizon optimal control are given in Section 3 and Section 4, respectively. Then, the stability analysis is developed in Section 5. The simulation results are presented in Section 6. Finally, the conclusions of the paper are given in the last section.
Notations. Throughout the paper, $\mathbb{R}^{m\times n}$ stands for the set of all $m \times n$ real matrices. $\nabla_x f$ stands for the gradient of $f$ with respect to $x$, i.e., $\nabla_x f = \partial f/\partial x$. $\mathrm{sign}(\cdot)$ stands for the sign function.

2. Problem Descriptions and Preliminaries

Before giving the attitude dynamics of an aircraft, a diagram illustrating the model parameters of the aircraft is given in Table 1. Regardless of the unsteady aerodynamics, the longitudinal attitude dynamics of an aircraft with system uncertainties and external disturbances can be described as follows [7].
$\dot{\alpha} = f_1(\alpha) + q + \Delta f_1(\alpha) + d_1$,  (1)
$\dot{q} = f_2(\alpha, q) + g_2\,\delta_z(u) + \Delta f_2(\alpha, q) + d_2$,  (2)
where $f_1(\alpha) \in \mathbb{R}$ and $f_2(\alpha, q) \in \mathbb{R}$ are the known internal system dynamics, $g_2 \in \mathbb{R}$ is the known control coefficient, $u \in \mathbb{R}$ is the unconstrained control to be designed, $\Delta f_1(\alpha) \in \mathbb{R}$ and $\Delta f_2(\alpha, q) \in \mathbb{R}$ are the unknown system uncertainties, and $d_1 \in \mathbb{R}$ and $d_2 \in \mathbb{R}$ are the unknown external disturbances. $f_1(\alpha)$, $f_2(\alpha, q)$, $g_2$, and $\delta_z(u)$ are given by
$f_1(\alpha) = \frac{1}{MV}\left(-L - T\sin\alpha + Mg\cos\gamma\right)$,  (3)
$f_2(\alpha, q) = \frac{\bar{q} S \bar{c}\, C_m}{I_{yy}}$,  (4)
$g_2 = \frac{1}{I_{yy}} x_T T \frac{\pi}{180}$,  (5)
$\delta_z(u) = \begin{cases} u_M\,\mathrm{sign}(u), & |u| \ge u_M \\ u, & |u| < u_M, \end{cases}$  (6)
where $u_M$ is a known bound of $u$.
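As a concrete illustration, the saturation nonlinearity $\delta_z(u)$ described above can be sketched in a few lines of Python; the bound value used below is illustrative only:

```python
import numpy as np

def delta_z(u: float, u_M: float) -> float:
    """Input saturation: u passes through unchanged while |u| < u_M
    and is clipped to +/- u_M otherwise."""
    return u_M * np.sign(u) if abs(u) >= u_M else u

# inside the bound the input is unchanged; outside it is clipped
print(delta_z(0.3, 0.5), delta_z(-1.2, 0.5))  # 0.3 -0.5
```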
In this paper, the control objective is to design a controller u so that the angle of attack $\alpha$ is driven to track a desired signal $\alpha_c$ in a finite-horizon optimal manner, and all the signals in the control system (1) and (2) are uniformly ultimately bounded (UUB). To illustrate the design of the proposed method, the control block diagram is shown in Figure 1. To deal with the external disturbances and the system uncertainties, the NDO and the NN are designed together with the backstepping control. To solve the problem of the input constraints, the auxiliary system is used in the forward control, which transforms the longitudinal attitude dynamics of the aircraft into the nominal form. To carry out optimal control, the ADP-based finite-horizon optimal control method is designed with the critic NN.
For the design of the controller hereinafter, the following assumption is required.
Assumption 1. 
In Equations (1) and (2), the system uncertainties $\Delta f_1(\alpha)$ and $\Delta f_2(\alpha, q)$ as well as the external disturbances $d_1$ and $d_2$ are differentiable. In addition, the first derivatives of $d_1$ and $d_2$ are bounded such that $|\dot{d}_1| \le \dot{d}_{1M}$. Furthermore, $g_2$ is invertible and bounded such that $0 < g_2 \le g_{2M}$.
Remark 1. 
Investigating the expression of $g_2$ shown in Equation (5), the boundedness of $I_{yy}$, $x_T$, and $T$ yields the boundedness of $g_2$, and the non-zero values of $x_T$ and $T$ make $g_2$ invertible. Thus, Assumption 1 is reasonable.

3. Design for Backstepping Control

In this section, a backstepping method with NN and NDO is derived to design a forward controller. The NN is constructed to approximate the system uncertainties while the NDO is designed to approximate the external disturbances. Then, the system comprising (1) and (2) is transformed into a nominal system to be controlled in a finite-horizon optimal manner by ADP.
In order to obtain satisfying control performance, the negative effects caused by unknown system uncertainties must be eliminated. According to the NN theory, the system uncertainties Δ f 1 ( α ) and Δ f 2 ( α , q ) can be approximated as [22]
$\Delta f_1(\alpha) = L_1^{-1}\left(W_1^{*T} a_1(\alpha) + r_1^*\right)$,  (7)
$\Delta f_2(\alpha, q) = L_2^{-1}\left(W_2^{*T} a_2(\alpha, q) + r_2^*\right)$,  (8)
where $L_1 \in \mathbb{R}$ and $L_2 \in \mathbb{R}$ are the parameters to be designed, $a_1(\alpha) \in \mathbb{R}^{n_1}$ and $a_2(\alpha, q) \in \mathbb{R}^{n_2}$ are basis functions, $n_1$ and $n_2$ are the numbers of basis functions of $a_1(\alpha)$ and $a_2(\alpha, q)$, respectively, $W_1^* \in \mathbb{R}^{n_1}$ and $W_2^* \in \mathbb{R}^{n_2}$ are the ideal weight vectors, and $r_1^*$ and $r_2^*$ are the approximation errors of the NN.
Assumption 2 
([20]). $W_1^*$ and $W_2^*$ are both bounded such that $\|W_1^*\| \le W_{1M}^*$ and $\|W_2^*\| \le W_{2M}^*$.
Substituting Equations (7) and (8) into (1) and (2) yields
$\dot{\alpha} = f_1(\alpha) + q + L_1^{-1} W_1^{*T} a_1(\alpha) + D_1$,  (9)
$\dot{q} = f_2(\alpha, q) + g_2\,\delta_z(u) + L_2^{-1} W_2^{*T} a_2(\alpha, q) + D_2$,  (10)
where $D_1 = L_1^{-1} r_1^* + d_1 \in \mathbb{R}$ and $D_2 = L_2^{-1} r_2^* + d_2 \in \mathbb{R}$ are treated as compound disturbances [22].
For the purpose of compensating for input constraints, an auxiliary system is designed as
$\dot{S}_2 = -k_{aux} S_2 + g_2\Delta$,  (11)
where $\Delta = \delta_z(u) - u$, $S_2 \in \mathbb{R}$ is an auxiliary control signal, and $k_{aux}$ is the parameter to be designed.
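To see how the auxiliary signal behaves, its dynamics can be integrated with a simple forward-Euler loop. This is a sketch under assumptions: the stabilizing form $\dot{S}_2 = -k_{aux}S_2 + g_2\Delta$ with negative feedback is assumed, and the gains, step size, and constant saturation excess $\Delta$ below are illustrative values, not the paper's:

```python
# Forward-Euler sketch of the auxiliary system S2_dot = -k_aux*S2 + g2*Delta.
k_aux, g2, dt = 5.0, 0.8, 0.001
S2 = 0.0
Delta = 0.2             # constant saturation excess, Delta = delta_z(u) - u
for _ in range(10000):  # 10 s of simulated time
    S2 += dt * (-k_aux * S2 + g2 * Delta)
# S2 settles at the steady-state value g2*Delta/k_aux = 0.032
print(S2)
```

With a bounded $\Delta$ (Assumption 3 below), $S_2$ therefore stays bounded, which is what the subsequent Lyapunov analysis relies on.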
Assumption 3 
([8]). $\Delta$ is bounded such that $|\Delta| \le \bar{\Delta}$.
The error system is defined as [20,22]
$z_1 = \alpha - \alpha_c$,  (12)
$z_2 = q - q_c - S_2$,  (13)
where q c R 1 is the virtual control law to be designed.
Invoking Equations (9)–(11), the dynamics of the error system are derived as
$\dot{z}_1 = f_1(\alpha) + q + L_1^{-1} W_1^{*T} a_1(\alpha) + D_1 - \dot{\alpha}_c$,  (14)
$\dot{z}_2 = f_2(\alpha, q) + g_2 u + L_2^{-1} W_2^{*T} a_2(\alpha, q) + D_2 - \dot{q}_c + k_{aux} S_2$.  (15)
For the purpose of achieving the backstepping control and finite-horizon optimal control, both the virtual control $q_c$ and the unconstrained control u are divided into two parts as
$q_c = q_{ca} + q_c^*$,  (16)
$u = u_a + u^*$,  (17)
where q c a and u a are the virtual control input and unconstrained control input in the backstepping scheme, respectively, and q c * and u * are the virtual control input and unconstrained control input in the finite-horizon optimal control scheme, respectively.
To estimate the compound disturbances, NDOs are designed as [22]
$\hat{D}_1 = \varsigma_1 + H_1(\alpha), \qquad \dot{\varsigma}_1 = -L_1\left(f_1(\alpha) + q + \hat{D}_1\right) - \hat{W}_1^{T} a_1(\alpha) + z_1$,  (18)
$\hat{D}_2 = \varsigma_2 + H_2(q), \qquad \dot{\varsigma}_2 = -L_2\left(f_2(\alpha, q) + g_2\,\delta_z(u) + \hat{D}_2\right) - \hat{W}_2^{T} a_2(\alpha, q) + z_2$,  (19)
where $\hat{D}_1$ and $\hat{D}_2$ are the estimates of $D_1$ and $D_2$, respectively, $H_1(\alpha) \in \mathbb{R}$ and $H_2(q) \in \mathbb{R}$ are functions to be designed that satisfy $L_1 = \partial H_1(\alpha)/\partial\alpha$ and $L_2 = \partial H_2(q)/\partial q$, and $\hat{W}_1$ and $\hat{W}_2$ are the estimates of $W_1^*$ and $W_2^*$, respectively.
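To build intuition for the NDO structure above, a simplified scalar analogue can be simulated, with the NN term omitted and $H(x) = Lx$ so that $\partial H/\partial x = L$. The dynamics $f$, gain $L$, step size, and disturbance below are illustrative choices, not the paper's:

```python
import numpy as np

# Simplified scalar NDO: plant x_dot = f(x) + D, estimate D_hat = varsigma + L*x,
# varsigma_dot = -L*(f(x) + D_hat), which gives D_tilde_dot = D_dot - L*D_tilde.
f = lambda x: -x                       # known internal dynamics (illustrative)
L, dt, T = 20.0, 1e-4, 5.0             # observer gain, step size, horizon
x, varsigma = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    D = 0.5 * np.sin(0.5 * t)          # slowly varying disturbance
    D_hat = varsigma + L * x
    x_dot = f(x) + D
    varsigma_dot = -L * (f(x) + D_hat)
    x += dt * x_dot
    varsigma += dt * varsigma_dot
print(abs(D_hat - D))                  # small residual of order |D_dot|/L
```

The estimation error decays at rate $L$ and settles at a residual proportional to the disturbance rate over $L$, which is why a bounded disturbance derivative (Assumption 1) matters.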
We define the estimation errors as
$\tilde{W}_1 = \hat{W}_1 - W_1^*$,  (20)
$\tilde{W}_2 = \hat{W}_2 - W_2^*$,  (21)
$\tilde{D}_1 = D_1 - \hat{D}_1$,  (22)
$\tilde{D}_2 = D_2 - \hat{D}_2$.  (23)
Invoking Equations (9), (10), (18), (19), (22), and (23), the dynamics of D ˜ 1 and D ˜ 2 can be written as
$\dot{\tilde{D}}_1 = \dot{D}_1 - L_1\tilde{D}_1 + \tilde{W}_1^{T} a_1(\alpha) - z_1$,  (24)
$\dot{\tilde{D}}_2 = \dot{D}_2 - L_2\tilde{D}_2 + \tilde{W}_2^{T} a_2(\alpha, q) - z_2$.  (25)
Then, the backstepping control law can be designed as follows.
Step 1: Taking Equation (14) into account, the virtual control input q c a is designed as
$q_{ca} = -\left(k_1 z_1 + f_1(\alpha_c) + L_1^{-1}\hat{W}_1^{T} a_1(\alpha) + \hat{D}_1 - \dot{\alpha}_c\right)$,  (26)
where k 1 is the parameter to be designed.
The weights vector W ^ 1 is updated as
$\dot{\hat{W}}_1 = \Omega_1^{-1}\left(a_1(\alpha)\, z_1^{T} L_1^{-1} - \tau_1\hat{W}_1\right)$,  (27)
where $\Omega_1 \in \mathbb{R}^{n_1\times n_1}$ is the positive definite symmetric matrix to be designed [22], and $\tau_1$ is the parameter to be designed.
Invoking Equations (13), (14), (16), and (26) yields
$\dot{z}_1 = f_1(\alpha) - f_1(\alpha_c) + q_c^* + z_2 - k_1 z_1 - L_1^{-1}\tilde{W}_1^{T} a_1(\alpha) + \tilde{D}_1 + S_2$.  (28)
In the normal backstepping scheme, it is inevitable to differentiate $q_c$. Nevertheless, owing to the unknown information in the partial derivatives of $q_c$, it is intractable to obtain this derivative. In order to avoid the differentiation operation, a dynamic surface control (DSC) technique is applied as [28]
$\tau\dot{\lambda} + \lambda = q_c, \qquad \lambda(0) = q_c(0)$,  (29)
where $\tau$ is the parameter to be designed, and $\lambda$ is in nature the output of a first-order filter approximating $q_c$, such that $\dot{\lambda}$ can substitute for $\dot{q}_c$.
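The DSC filter is an ordinary first-order lag, which can be sketched as follows; the time constant, command signal, and step size are illustrative values:

```python
import numpy as np

# First-order DSC filter tau*lambda_dot + lambda = q_c: lambda tracks q_c
# with a small lag so lambda_dot can stand in for q_c_dot, avoiding the
# explicit differentiation of the virtual control.
tau, dt = 0.02, 1e-4
q_c = lambda t: np.sin(2.0 * t)        # virtual control signal (illustrative)
lam = q_c(0.0)                         # filter initialized at q_c(0)
for k in range(int(2.0 / dt)):
    lam += dt * (q_c(k * dt) - lam) / tau
print(abs(lam - q_c(2.0)))             # lag error of order tau*|q_c_dot|
```

Shrinking $\tau$ reduces the lag error $e = \lambda - q_c$ at the cost of a faster filter, which is the trade-off handled in the stability analysis below.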
We define the error as
$e = \lambda - q_c$.  (30)
Then, we have
$\dot{e} = \dot{\lambda} - \dot{q}_c = -\frac{e}{\tau} - \left(\frac{\partial q_c}{\partial\alpha}\dot{\alpha} + \frac{\partial q_c}{\partial\alpha_c}\dot{\alpha}_c + \frac{\partial q_c}{\partial z_1}\dot{z}_1 + \frac{\partial q_c}{\partial\hat{W}_1}\dot{\hat{W}}_1 + \frac{\partial q_c}{\partial\hat{D}_1}\dot{\hat{D}}_1 + \frac{\partial q_c}{\partial\dot{\alpha}_c}\ddot{\alpha}_c + \frac{\partial q_c}{\partial\hat{W}_c}\dot{\hat{W}}_c\right) = -\frac{e}{\tau} + M_d(z_1, z_2, e, \hat{W}_1, \hat{D}_1, \hat{W}_c, \alpha_c, \dot{\alpha}_c, \ddot{\alpha}_c)$,  (31)
where $M_d(\cdot)$ denotes the bracketed term above, and $\hat{W}_c \in \mathbb{R}^{L}$ is the weight vector of the critic NN designed hereinafter.
Assumption 4 
([28]). $M_d(\cdot)$ is a continuous function. For any $C_1 > 0$ and $C_2 > 0$, the sets $\Pi_1 := \{(\alpha_c, \dot{\alpha}_c, \ddot{\alpha}_c): \alpha_c^2 + \dot{\alpha}_c^2 + \ddot{\alpha}_c^2 \le C_1\}$ and $\Pi_2 := \{\sum_{j=1}^{2} z_j^2 + \tilde{W}_1^{T}\Omega_1\tilde{W}_1 + \tilde{W}_c^{T}\tilde{W}_c + e^2 + \tilde{D}_1^2 + \ddot{\alpha}_c^2 \le C_2\}$ are compact in $\mathbb{R}^3$ and $\mathbb{R}^{5+n_1+L}$, respectively. Hence, $\Pi_1\times\Pi_2$ is also compact. Considering the continuity property, the function $M_d(\cdot)$ is bounded for the given initial conditions in the compact set $\Pi_1\times\Pi_2$ such that $|M_d(\cdot)| \le M$.
We consider the Lyapunov function candidate as
$V_1 = \frac{1}{2}z_1^2 + \frac{1}{2}e^2 + \frac{1}{2}\tilde{W}_1^{T}\Omega_1\tilde{W}_1 + \frac{1}{2}\tilde{D}_1^2$.  (32)
Differentiating V 1 and invoking Equations (20), (24), and (27)–(31) yields
$\dot{V}_1 = z_1\left(f_1(\alpha) - f_1(\alpha_c) + q_c^*\right) + z_1 z_2 - k_1 z_1^2 - \frac{e^2}{\tau} + e(-\dot{q}_c) + z_1 S_2 - \tau_1\tilde{W}_1^{T}\hat{W}_1 + \tilde{D}_1\dot{D}_1 - L_1\tilde{D}_1^2 + \tilde{D}_1\tilde{W}_1^{T} a_1(\alpha)$.  (33)
In addition, we have
$\tilde{W}_1^{T}\hat{W}_1 \ge \frac{1}{2}\|\tilde{W}_1\|^2 - \frac{1}{2}\|W_1^*\|^2$.  (34)
Taking Assumption 4 and Young's inequality into account and invoking inequality (34) yields
$\dot{V}_1 \le z_1\left(f_1(\alpha) - f_1(\alpha_c) + q_c^*\right) - (k_1 - 1)z_1^2 + \frac{1}{2}z_2^2 - \left(\frac{1}{\tau} - \frac{1}{2}\right)e^2 + \frac{1}{2}S_2^2 - \frac{1}{2}\left(\tau_1 - \iota_1^{-1}\right)\|\tilde{W}_1\|^2 - \left(L_1 - \frac{1}{2} - \frac{1}{2}\iota_1 a_{1M}^2\right)\tilde{D}_1^2 + \frac{1}{2}\dot{D}_1^2 + \frac{1}{2}\tau_1\|W_1^*\|^2 + \frac{1}{2}M^2$,  (35)
where $\|a_1(\alpha)\| \le a_{1M}$, and $\iota_1$ is the parameter to be designed.
Step 2: Taking Equation (15) into account, the control input u a is designed as
$u_a = -g_2^{-1}\left(f_2(\alpha_c, q_c) - \dot{\lambda} + k_2 z_2 + L_2^{-1}\hat{W}_2^{T} a_2(\alpha, q) + \hat{D}_2 + k_{aux} S_2\right)$,  (36)
where k 2 is the parameter to be designed.
The weights vector W ^ 2 is updated as
$\dot{\hat{W}}_2 = \Omega_2^{-1}\left(a_2(\alpha, q)\, z_2^{T} L_2^{-1} - \tau_2\hat{W}_2\right)$,  (37)
where $\Omega_2 \in \mathbb{R}^{n_2\times n_2}$ is the positive definite symmetric matrix to be designed [22], and $\tau_2$ is the parameter to be designed.
Invoking Equations (15), (17), (29)–(31), and (36) yields
$\dot{z}_2 = f_2(\alpha, q) - f_2(\alpha_c, q_c) + g_2 u^* - L_2^{-1}\tilde{W}_2^{T} a_2(\alpha, q) + \tilde{D}_2 - k_2 z_2 - \frac{e}{\tau} - \dot{q}_c$.  (38)
We consider the Lyapunov function candidate as
$V_2 = \frac{1}{2}z_2^2 + \frac{1}{2}\tilde{W}_2^{T}\Omega_2\tilde{W}_2 + \frac{1}{2}\tilde{D}_2^2 + \frac{1}{2}S_2^2$.  (39)
Differentiating V 2 and invoking Equations (21), (25), (37), and (38) yields
$\dot{V}_2 = z_2\left(f_2(\alpha, q) - f_2(\alpha_c, q_c) + g_2 u^*\right) - k_2 z_2^2 - \frac{z_2 e}{\tau} + z_2(-\dot{q}_c) - \tau_2\tilde{W}_2^{T}\hat{W}_2 + \tilde{D}_2\dot{D}_2 - L_2\tilde{D}_2^2 + \tilde{D}_2\tilde{W}_2^{T} a_2(\alpha, q) - k_{aux} S_2^2 + S_2 g_2\Delta$.  (40)
In addition, we have
$\tilde{W}_2^{T}\hat{W}_2 = \frac{1}{2}\|\tilde{W}_2\|^2 + \frac{1}{2}\|\hat{W}_2\|^2 - \frac{1}{2}\|W_2^*\|^2 \ge \frac{1}{2}\|\tilde{W}_2\|^2 - \frac{1}{2}\|W_2^*\|^2$.  (41)
Taking Assumptions 3 and 4 and Young’s inequality into account and invoking inequality (41) yields
$\dot{V}_2 \le z_2\left(f_2(\alpha, q) - f_2(\alpha_c, q_c) + g_2 u^*\right) - \left(k_2 + \frac{1}{2\tau} - \frac{1}{2}\right)z_2^2 - \frac{1}{2\tau}e^2 - \frac{1}{2}\left(\tau_2 - \iota_2^{-1}\right)\|\tilde{W}_2\|^2 - \left(L_2 - \frac{1}{2} - \frac{1}{2}\iota_2 a_{2M}^2\right)\tilde{D}_2^2 - \left(k_{aux} - \frac{1}{2}\theta_{aux} g_2^2\right)S_2^2 + \frac{1}{2}\dot{D}_2^2 + \frac{\tau_2}{2}\|W_2^*\|^2 + \frac{1}{2}\theta_{aux}^{-1}\bar{\Delta}^2 + \frac{1}{2}M^2$,  (42)
where $\|a_2(\alpha, q)\| \le a_{2M}$, and $\iota_2$ and $\theta_{aux}$ are the parameters to be designed.
The nominal affine nonlinear system is defined as
$\dot{Z} = F(Z) + GU$,  (43)
where
$Z = [z_1, z_2]^T \in \mathbb{R}^2$,  (44)
$F(Z) = \left[f_1(\alpha) - f_1(\alpha_c),\; f_2(\alpha, q) - f_2(\alpha_c, q_c)\right]^T \in \mathbb{R}^2$,  (45)
$G = \begin{bmatrix} 1 & 0 \\ 0 & g_2 \end{bmatrix} \in \mathbb{R}^{2\times 2}$,  (46)
$U = [q_c^*, u^*]^T \in \mathbb{R}^2$.  (47)
We consider the Lyapunov function candidate V b in the backstepping scheme as
$V_b = V_1 + V_2$.  (48)
Differentiating V b and invoking Equations (35) and (42)–(47) yields
$\dot{V}_b \le Z^{T}\left(F(Z) + GU\right) - (k_1 - 1)z_1^2 - \left(k_2 + \frac{1}{2\tau} - 1\right)z_2^2 - \left(\frac{3}{2\tau} - \frac{1}{2}\right)e^2 - \left(k_{aux} - \frac{1}{2} - \frac{1}{2}\theta_{aux} g_2^2\right)S_2^2 - \frac{1}{2}\left(\tau_1 - \iota_1^{-1}\right)\|\tilde{W}_1\|^2 - \frac{1}{2}\left(\tau_2 - \iota_2^{-1}\right)\|\tilde{W}_2\|^2 - \left(L_1 - \frac{1}{2} - \frac{1}{2}\iota_1 a_{1M}^2\right)\tilde{D}_1^2 - \left(L_2 - \frac{1}{2} - \frac{1}{2}\iota_2 a_{2M}^2\right)\tilde{D}_2^2 + \frac{1}{2}\dot{D}_1^2 + \frac{1}{2}\dot{D}_2^2 + \frac{1}{2}\tau_1\|W_1^*\|^2 + \frac{\tau_2}{2}\|W_2^*\|^2 + \frac{1}{2}\theta_{aux}^{-1}\bar{\Delta}^2 + M^2$.  (49)
Remark 2. 
Based on Assumption 1, the first derivatives of $D_1$ and $D_2$ are bounded such that $|\dot{D}_1| \le \dot{D}_{1M}$ and $|\dot{D}_2| \le \dot{D}_{2M}$. In addition, considering the optimal approximation property of the NN, the ideal weight vectors are bounded. Thus, Assumption 2 is reasonable. Furthermore, if the difference $\Delta$ between the desired control input and the saturated input were unbounded, the desired attitude motion would be uncontrollable. Thus, Assumption 3 is reasonable. In addition, in terms of the compactness and continuity properties, which have been detailed in [28], Assumption 4 is reasonable. Detailed derivations of some of the equations above are provided in Appendix A.

4. Design for Finite-Horizon Optimal Control

In this section, an ADP based finite-horizon optimal control method is designed to make the nominal system (43) controlled in a finite-horizon optimal manner. In order to approximate the value function in the HJB equation, an NN consisting of constant weights and a time-state-dependent feature function is constructed. A novel weight updating law is proposed in order to minimize the objective function, remove the requirement for the initial admissible control, and guarantee the Lyapunov stability.
The objective of the finite-horizon optimal control is to minimize the finite-horizon cost function defined as
$V(Z, t) = \psi(Z(t_f), t_f) + \int_{t}^{t_f} r(Z, U)\, d\tau$,  (50)
where $\psi(Z(t_f), t_f)$ is the terminal constraint on the terminal state $Z(t_f)$, and $r(Z, U)$ is the cost-to-go function defined as
$r(Z, U) = Z^{T} Q Z + U^{T} R U$,  (51)
where $Q \in \mathbb{R}^{2\times 2}$ and $R \in \mathbb{R}^{2\times 2}$ are symmetric positive definite matrices.
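For instance, the quadratic cost-to-go $r(Z, U)$ is straightforward to evaluate; the weighting matrices below are illustrative choices, not the paper's:

```python
import numpy as np

Q = np.diag([10.0, 1.0])   # state weighting (illustrative)
R = np.diag([0.1, 0.1])    # control weighting (illustrative)

def running_cost(Z: np.ndarray, U: np.ndarray) -> float:
    """Cost-to-go r(Z, U) = Z^T Q Z + U^T R U."""
    return float(Z @ Q @ Z + U @ R @ U)

Z = np.array([0.2, -0.1])
U = np.array([1.0, 0.5])
print(running_cost(Z, U))  # 10*0.04 + 1*0.01 + 0.1*1.0 + 0.1*0.25 = 0.535
```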
Similarly, the terminal cost function is defined as
$V(Z, t_f) = \psi(Z(t_f), t_f)$.  (52)
Considering Equation (50), the Hamiltonian function of the nominal system (43) is given as
$H(Z, U, V(Z, t)) = \nabla_t V(Z, t) + Z^{T} Q Z + U^{T} R U + \nabla_Z^{T} V(Z, t)\left(F(Z) + GU\right)$.  (53)
Then, the optimal cost function V * ( Z , t ) satisfies the equation as [20]
$\min_{U} H(Z, U, V^*(Z, t)) = 0$.  (54)
According to Equation (54), the optimal control input U * meets the conditions as
$\left.\frac{\partial H(Z, U, V^*(Z, t))}{\partial U}\right|_{U = U^*} = 0$.  (55)
Hence, the optimal control input U * can be obtained as
$U^* = -\frac{1}{2}R^{-1}G^{T}\nabla_Z V^*(Z, t)$.  (56)
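As a numeric sanity check, the expression for $U^*$ can be evaluated for an assumed quadratic value function $V^*(Z) = Z^TPZ$, whose gradient is $2PZ$. The kernel $P$, the matrices $R$ and $G$, and the state below are illustrative values; only the control formula itself comes from the text:

```python
import numpy as np

g2 = 0.8
G = np.array([[1.0, 0.0], [0.0, g2]])
R = np.diag([0.1, 0.1])
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed value-function kernel

Z = np.array([0.2, -0.1])
grad_V = 2.0 * P @ Z                      # gradient of Z^T P Z
U_star = -0.5 * np.linalg.inv(R) @ G.T @ grad_V
print(U_star)
```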
Invoking Equation (56) into (53) and considering (54) yields
$H(Z, U^*, V^*(Z, t)) = \nabla_t V^*(Z, t) + Z^{T} Q Z + \nabla_Z^{T} V^*(Z, t) F(Z) - \frac{1}{4}\nabla_Z^{T} V^*(Z, t)\, G R^{-1} G^{T}\nabla_Z V^*(Z, t) = 0$.  (57)
The optimal cost function $V^*(Z, t)$ can be rewritten by an NN as
$V^*(Z, t) = W_c^{T} b_c(Z, t_f - t) + \varepsilon(Z, t)$,  (58)
where $b_c(Z, t_f - t) \in \mathbb{R}^{L}$ is the basis function vector, $L$ is the number of basis functions, $W_c \in \mathbb{R}^{L}$ is the ideal weight vector, and $\varepsilon(Z, t)$ is the approximation error.
Similarly, the terminal optimal cost function V * ( Z , t f ) can be written as
$V^*(Z, t_f) = W_c^{T} b_c(Z(t_f), 0) + \varepsilon(Z, t_f)$.  (59)
The gradients of V * ( Z , t ) with respect to t and Z are
$\nabla_t V^*(Z, t) = \nabla_t^{T} b_c(Z, t_f - t) W_c + \nabla_t\varepsilon(Z, t)$,  (60)
$\nabla_Z V^*(Z, t) = \nabla_Z^{T} b_c(Z, t_f - t) W_c + \nabla_Z\varepsilon(Z, t)$.  (61)
Assumption 5 
([20,22,25,26,29,30]). $W_c$, $\varepsilon(Z, t)$, $\nabla_t\varepsilon(Z, t)$, $\nabla_Z\varepsilon(Z, t)$, $b_c(Z, t_f - t)$, $\nabla_t b_c(Z, t_f - t)$, and $\nabla_Z b_c(Z, t_f - t)$ are all bounded such that $\|W_c\| \le W_{cM}$, $|\varepsilon(Z, t)| \le \varepsilon_M$, $\|\nabla_t\varepsilon(Z, t)\| \le \varepsilon_{tM}$, $\|\nabla_Z\varepsilon(Z, t)\| \le \varepsilon_{ZM}$, $\|b_c(Z, t_f - t)\| \le b_M$, $\|\nabla_t b_c(Z, t_f - t)\| \le b_{tM}$, and $\|\nabla_Z b_c(Z, t_f - t)\| \le b_{ZM}$.
Invoking Equation (61) into (56) yields
$U^* = -\frac{1}{2}R^{-1}G^{T}\nabla_Z^{T} b_c(Z, t_f - t) W_c - \frac{1}{2}R^{-1}G^{T}\nabla_Z\varepsilon(Z, t)$.  (62)
Invoking Equations (60)–(62) into (57) yields
$H(Z, U^*, V^*(Z, t)) = \nabla_t^{T} b_c(Z, t_f - t) W_c + Z^{T} Q Z + W_c^{T}\nabla_Z b_c(Z, t_f - t) F(Z) - \frac{1}{4}W_c^{T} X W_c + \varepsilon_{HJB} = 0$,  (63)
where
$X = \nabla_Z b_c(Z, t_f - t)\, G R^{-1} G^{T}\nabla_Z^{T} b_c(Z, t_f - t) \in \mathbb{R}^{L\times L}$,  (64)
$\varepsilon_{HJB} = \nabla_t\varepsilon(Z, t) + \nabla_Z^{T}\varepsilon(Z, t)\left(F(Z) + GU^*\right) + \frac{1}{4}\nabla_Z^{T}\varepsilon(Z, t)\, G R^{-1} G^{T}\nabla_Z\varepsilon(Z, t)$.  (65)
Lemma 1 
([31]). The nominal system (43) is asymptotically stable under the control
$U = -\frac{1}{2}\zeta R^{-1} G^{T}\nabla_Z V^*(Z, t)$,  (66)
where $\zeta \ge \frac{1}{2}$.
Assumption 6. 
$X$ and $\varepsilon_{HJB}$ are both bounded such that $\|X\| \le X_M$ and $|\varepsilon_{HJB}| \le \varepsilon_{HJBM}$.
Since W c is unknown, a critic NN is constructed to approximate the optimal cost function as
$\hat{V}(Z, t) = \hat{W}_c^{T} b_c(Z, t_f - t)$,  (67)
where $\hat{W}_c$ is the estimate of $W_c$.
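The critic evaluation $\hat{V}(Z,t) = \hat{W}_c^T b_c(Z, t_f - t)$ can be sketched with a hand-picked polynomial time-state feature vector; the basis, weights, and terminal time below are illustrative, not the paper's:

```python
import numpy as np

t_f = 5.0  # terminal time (illustrative)

def b_c(Z: np.ndarray, t_go: float) -> np.ndarray:
    """Time-state-dependent feature vector b_c(Z, t_f - t)."""
    z1, z2 = Z
    return np.array([z1 * z1, z1 * z2, z2 * z2,
                     t_go * z1 * z1, t_go * z2 * z2])

W_c_hat = np.array([1.0, 0.2, 0.5, 0.1, 0.05])  # critic weights (illustrative)

Z = np.array([0.2, -0.1])
V_hat = W_c_hat @ b_c(Z, t_f - 1.0)  # value estimate at t = 1 s
print(V_hat)
```

Because the features depend on the time-to-go $t_f - t$, constant weights can still represent a time-varying value function, which is what makes the finite-horizon approximation tractable online.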
We define the estimation error
$\tilde{W}_c = W_c - \hat{W}_c$.  (68)
Invoking Equation (67), the gradients of the optimal cost function with respect to t and Z can be approximated as
$\nabla_t\hat{V}(Z, t) = \nabla_t^{T} b_c(Z, t_f - t)\hat{W}_c$,  (69)
$\nabla_Z\hat{V}(Z, t) = \nabla_Z^{T} b_c(Z, t_f - t)\hat{W}_c$.  (70)
Invoking Equations (56) and (70), the estimation of the optimal control input can be written as
$\hat{U} = -\frac{1}{2}R^{-1}G^{T}\nabla_Z^{T} b_c(Z, t_f - t)\hat{W}_c$,  (71)
where $\hat{U} = [\hat{q}_c^*, \hat{u}^*]^T \in \mathbb{R}^2$.
Similar to Equation (57), the estimation of the Hamiltonian function can be expressed as
$\hat{H}(Z, \hat{U}, \hat{V}(Z, t)) = \nabla_t^{T} b_c(Z, t_f - t)\hat{W}_c + Z^{T} Q Z + \hat{W}_c^{T}\nabla_Z b_c(Z, t_f - t) F(Z) - \frac{1}{4}\hat{W}_c^{T} X\hat{W}_c = e_c$.  (72)
Then, the optimal terminal cost function can be estimated as
$\hat{V}(Z, t_f) = \hat{W}_c^{T} b_c(\hat{Z}(t_f), 0)$,  (73)
where Z ^ ( t f ) is the estimation of Z ( t f ) [25,26,32].
We define the terminal constraints estimation error as
$e_{t_f} = \psi(Z(t_f), t_f) - \hat{W}_c^{T} b_c(\hat{Z}(t_f), 0)$.  (74)
Invoking Equations (72) and (74), a total squared error is defined as
$E = \frac{1}{2}e_c^{T} e_c + \frac{1}{2}e_{t_f}^{T} e_{t_f}$.  (75)
Prior to designing the weight updating law for the critic NN, an assumption is given.
Assumption 7. 
Considering system (43) with the optimal control input (56), we can always find a Lyapunov function $J_1$ that satisfies $\dot{J}_1 = \nabla_Z^{T} J_1\left(F(Z) + GU^*\right) < 0$. Furthermore, there is always a positive definite function $\Lambda(Z) \in \mathbb{R}^{2\times 2}$ that satisfies the following inequality:
$\nabla_Z^{T} J_1\left(F(Z) + GU^*\right) < -\nabla_Z^{T} J_1\,\Lambda(Z)\,\nabla_Z J_1$.  (76)
In order to minimize the total squared error (75), a novel weight updating law based on the gradient descent theory is developed as
W ^ ˙ c = c 1 β ¯ 1 m s e c c 1 β ¯ 2 m t e t f + 1 2 c 1 Φ Z b c ( Z , t f t ) G R 1 G T Z J 1 + c 1 ( 1 4 β ¯ 1 m s W ^ c T X W ^ c ( b c ( Z ^ ( t f ) , 0 ) β ¯ 2 T m t + Y 2 Y 1 β ¯ 1 T ) W ^ c ) ,
where c 1 > 0 is the parameter to be designed. J 1 is designed in Remark 4. Y 1 R L × 1 , Y 2 R L × L are the vector and matrix to be designed, respectively. β ¯ 1 , β ¯ 2 , m s , m t are expressed as β ¯ 1 = β 1 1 + β 1 T β 1 , β ¯ 2 = β 2 1 + β 2 T β 2 , m s = 1 + β 1 T β 1 , m t = 1 + β 2 T β 2 , where β 1 and β 2 are written as
β 1 = t b c ( Z , t f t ) + Z b c ( Z , t f t ) ( F ( Z ) + G U ^ ) R L × 1 ,
β 2 = b c ( Z ^ ( t f ) , 0 ) R L × 1 .
Φ is given as
\[ \Phi = \begin{cases} 0, & \text{if } \nabla_Z^{\mathrm T} J_1\,(F(Z) + G\hat U) < 0, \\ 1, & \text{otherwise}. \end{cases} \]
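A minimal numerical sketch of the gradient-descent part of the weight update law (77), together with the Φ switch (80), might look as follows. This is a simplification under stated assumptions: the stabilizing and fourth (UUB) terms are abbreviated into a single precomputed `stab_term` argument, and all names are hypothetical.

```python
import numpy as np

def critic_update(W_hat, beta1, beta2, e_c, e_tf, c1, stab_term, stability_ok):
    # Sketch of (77): the two normalized gradient-descent terms plus the
    # Phi-switched stabilizing term; the remaining terms are folded into
    # stab_term for brevity.
    m_s = 1.0 + beta1 @ beta1            # m_s = 1 + beta1^T beta1
    m_t = 1.0 + beta2 @ beta2            # m_t = 1 + beta2^T beta2
    beta1_bar = beta1 / m_s              # beta1_bar = beta1 / (1 + beta1^T beta1)
    beta2_bar = beta2 / m_t
    phi = 0.0 if stability_ok else 1.0   # indicator function (80)
    dW = (-c1 * beta1_bar / m_s * e_c
          - c1 * beta2_bar / m_t * e_tf
          + 0.5 * c1 * phi * stab_term)
    return dW
```

Integrating `dW` with any ODE solver then drives both the Hamiltonian residual e_c and the terminal error e_tf toward zero.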
Remark 3. 
Considering the optimal approximation property of the NN, Assumption 5 is reasonable. In addition, the optimal control U* is obtained when ζ = 1 in Equation (66). Taking Lemma 1 into account, U* can stabilize the nominal system (43); that is, F(Z) + GU* is bounded. Simultaneously, considering Assumptions 1 and 5, Assumption 6 is reasonable. Since U* stabilizes the nominal system (43), we can always find a Lyapunov function J₁ whose derivative with respect to t is negative and bounded; in general, J₁ can be designed as J₁ = (1/2)ZᵀZ. Thus, Assumption 7 is reasonable.
Remark 4. 
According to the expressions of β̄₁, m_s, and m_t, we have that β̄₁, 1/m_s, and 1/m_t are bounded as ‖β̄₁‖ ≤ β̄₁M, 0 < 1/m_s ≤ 1, and 0 < 1/m_t ≤ 1, respectively.
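Remark 4 can also be checked numerically: since ‖β̄₁‖ = ‖β₁‖/(1 + ‖β₁‖²) peaks at ‖β₁‖ = 1, an explicit bound is β̄₁M = 1/2. The quick sketch below is our own check, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    beta = rng.normal(size=14) * rng.uniform(0.0, 100.0)
    m_s = 1.0 + beta @ beta              # m_s = 1 + beta^T beta >= 1
    beta_bar = beta / m_s
    # ||beta_bar|| = ||beta|| / (1 + ||beta||^2) <= 1/2, and 0 < 1/m_s <= 1
    assert np.linalg.norm(beta_bar) <= 0.5 + 1e-12
    assert 0.0 < 1.0 / m_s <= 1.0
```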
Remark 5. 
The first and second terms in Equation (77) are employed to minimize the total squared error based on gradient-descent theory, while the third term is used to enhance the stabilizing ability of the controller. In more detail, according to Equation (80), the third term vanishes when ∇_Z^T J₁(F(Z) + GÛ) < 0, which can be treated as a stability characteristic of the system, and it is activated to reinforce stability when this characteristic is lost. Thus, the requirement of an initial admissible control is avoided. In addition, the fourth term is designed to ensure the UUB stability of the system in the subsequent proof.

5. Stability Analysis

In this section, Theorem 1 is proposed to analyze the stability of the closed-loop system controlled by the backstepping and finite-horizon optimal control methods. Theorem 1 is given as follows.
Theorem 1. 
For the system comprising (1) and (2) with the associated finite-horizon cost function (50), let the backstepping control inputs and the finite-horizon optimal control inputs be designed as Equations (26), (36), and (71), respectively; let the virtual control input q_c and the unconstrained control input u be designed as Equations (16) and (17), respectively; and let the NN weight vector tuning laws be given by Equations (27), (37), and (77). Then, the closed-loop state errors z₁, z₂, the weight estimation errors W̃₁, W̃₂, W̃_c, the disturbance estimation errors D̃₁, D̃₂, the DSC system state error e, and the auxiliary system state error S₂ are UUB with appropriately designed parameters.
Proof. 
We consider the following Lyapunov function as
J = V b + 1 2 W ˜ c T c 1 1 W ˜ c ,
where V b is defined as Equation (48).
Differentiating J yields
J ˙ = V ˙ b + W ˜ c T c 1 1 W ˜ ˙ c ,
where V ˙ b is given as Equation (49).
Next, W ˜ c T c 1 1 W ˜ ˙ c is derived as follows.
Invoking Equations (69)–(71) and (78) into (72) yields
e c = Z T Q Z + W ^ c T β 1 + 1 4 W ^ c T X W ^ c .
Invoking Equation (63) yields
t T b c ( Z , t f t ) W c + W c T Z b c ( Z , t f t ) F ( Z ) = Z T Q Z + 1 4 W c T X W c ε H J B .
Invoking Equations (68)–(70) and (84) into (83) yields
e c = W ˜ c T β 1 + W c T β 1 W c T β 1 4 W c T X W c + 1 4 W ^ c T X W ^ c ε H J B ,
where
β = t b c ( Z , t f t ) + Z b c ( Z , t f t ) ( F ( Z ) 1 2 G R 1 G T Z T b c ( Z , t f t ) W c ) .
Invoking Equations (64), (71), (78), and (86) yields
W c T β 1 W c T β = 1 2 W c T X W ^ c + 1 2 W c T X W c .
Invoking Equation (87) yields
W c T β 1 W c T β 1 4 W c T X W c + 1 4 W ^ c T X W ^ c = 1 4 W ˜ c T X W ˜ c .
Invoking Equation (88) into (85) yields
e c = W ˜ c T β 1 + 1 4 W ˜ c T X W ˜ c ε H J B .
Invoking Equation (89) yields
W ˜ c T β ¯ 1 m s e c = W ˜ c T β ¯ 1 β ¯ 1 T W ˜ c + 1 4 W ˜ c T β ¯ 1 m s W ˜ c T X W ˜ c W ˜ c T β ¯ 1 m s ε H J B .
In addition, we have
W ˜ c T X W ˜ c = 2 W c T X W ˜ c W c T X W c + W ^ c T X W ^ c .
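The quadratic identity above can be verified numerically for a symmetric X (here X = ∇_Z b_c G R⁻¹ Gᵀ ∇_Z^T b_c^T is symmetric by construction), assuming the sign convention W̃_c = W_c − Ŵ_c:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 14
A = rng.normal(size=(L, 2))
X = A @ A.T                          # symmetric PSD, like grad_bc G R^-1 G^T grad_bc^T
W = rng.normal(size=L)               # ideal critic weights W_c
W_hat = rng.normal(size=L)           # estimated weights
W_til = W - W_hat                    # estimation error (sign convention assumed)
lhs = W_til @ X @ W_til
rhs = 2.0 * W @ X @ W_til - W @ X @ W + W_hat @ X @ W_hat
assert np.isclose(lhs, rhs)          # identity holds for any symmetric X
```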
Invoking Equations (52), (58), and (68) into (74) yields
e t f = W c T b ˜ c ( Z ( t f ) , 0 ) + ε ( Z , t f ) + W ˜ c T b c ( Z ^ ( t f ) , 0 ) ,
where
b ˜ c ( Z ( t f ) , 0 ) = b c ( Z ( t f ) , 0 ) b c ( Z ^ ( t f ) , 0 ) .
Invoking Equations (68) and (90)–(92) into (77) yields
W ˜ c T c 1 1 W ˜ ˙ c = W ˜ c T β ¯ 1 β ¯ 1 T W ˜ c W ˜ c T Y 2 W ˜ c + W ˜ c T β ¯ 1 ( Y 1 T + 1 2 1 m s W c T X ) W ˜ c + W ˜ c T β ¯ 1 ( 1 4 1 m s W c T X W c + 1 m s κ 1 ) + W ˜ c T ( b c ( Z ^ ( t f ) , 0 ) β ¯ 2 m t + Y 2 ) W c Y 1 β ¯ 1 T W c + β ¯ 2 m t κ 2 ) 1 2 W ˜ c T Φ Z b c ( Z , t f t ) G R 1 G T J 1 z ,
where
κ 1 = ε H J B
κ 2 = W c T b ˜ c ( Z ( t f ) , 0 ) + ε ( Z , t f ) .
Invoking Equations (49), (82), and (94) yields
J ˙ Z T J 1 ( F ( Z ) + G U ^ ) ( k 1 1 ) z 1 2 ( k 2 + 1 2 τ 1 2 ) z 2 2 ( 3 2 τ 1 2 ) e 2 ( k a u x 1 2 1 2 θ a u x g 2 2 ) S 2 2 1 2 ( τ 1 ι 1 1 ) W 1 ˜ 2 1 2 ( τ 2 ι 2 1 ) W 2 ˜ 2 ( L 1 1 2 1 2 ι 1 a 1 M 2 ) D ˜ 1 2 ( L 2 1 2 1 2 ι 2 a 2 M 2 ) D ˜ 2 2 + 1 2 D ˙ 1 2 + 1 2 D ˙ 2 2 + 1 2 τ 1 W 1 * 2 + τ 2 2 W 2 * 2 + 1 2 θ a u x 1 Δ ¯ 2 + M 2 W ˜ c T β ¯ 1 β ¯ 1 T W ˜ c + W ˜ c T β ¯ 1 ( Y 1 T + 1 2 1 m s W c T X ) W ˜ c W ˜ c T Y 2 W ˜ c + W ˜ c T β ¯ 1 ( 1 4 1 m s W c T X W c + κ 1 m s ) + W ˜ c T ( ( b c ( Z ^ ( t f ) , 0 ) β ¯ 2 T m t + Y 2 ) W c Y 1 β ¯ 1 T W c + β ¯ 2 m t κ 2 ) 1 2 W ˜ c T Φ Z b c ( Z , t f t ) G R 1 G T Z J 1 .
Let Ξ = [z₁, z₂, W̃_cᵀ, W̃₁ᵀ, W̃₂ᵀ, D̃₁, D̃₂, e, S₂]ᵀ ∈ ℝ^(6+L+n₁+n₂); then, Equation (97) can be rewritten in matrix form as
J ˙ Z T J 1 ( F ( Z ) + G U ^ ) Ξ T M ¯ Ξ + Ξ T N ¯ + d 1 2 W ˜ c T Φ Z b c ( Z , t f t ) G R 1 G T Z J 1 ,
where
M ¯ = M ¯ 11 0 0 0 0 0 0 0 0 0 M ¯ 22 0 0 0 0 0 0 0 0 0 M ¯ 33 0 0 0 0 0 0 0 0 0 M ¯ 44 0 0 0 0 0 0 0 0 0 M ¯ 55 0 0 0 0 0 0 0 0 0 M ¯ 66 0 0 0 0 0 0 0 0 0 M ¯ 77 0 0 0 0 0 0 0 0 0 M ¯ 88 0 0 0 0 0 0 0 0 0 M ¯ 99 R ( 6 + L + n 1 + n 2 ) × ( 6 + L + n 1 + n 2 )
N ¯ = 0 0 N ¯ 3 T 0 1 × n 1 0 1 × n 2 0 0 0 0 T R ( 6 + L + n 1 + n 2 )
M ¯ 11 = k 1 1 R 1
M ¯ 22 = k 2 + 1 2 τ 1 2 R 1
M ¯ 33 = β ¯ 1 β ¯ 1 T + Y 2 β ¯ 1 ( Y 1 + 1 2 1 m s W c T X )
M ¯ 44 = 1 2 ( τ 1 ι 1 1 ) I n 1 × n 1 R n 1 × n 1
M ¯ 55 = 1 2 ( τ 2 ι 2 1 ) I n 2 × n 2 R n 2 × n 2
M ¯ 66 = L 1 1 2 1 2 ι 1 a 1 M 2 R 1
M ¯ 77 = L 2 1 2 1 2 ι 2 a 2 M 2 R 1
M ¯ 88 = 3 2 τ 1 2 R 1
M ¯ 99 = k a u x 1 2 1 2 θ a u x g 2 2 R 1
N ¯ 3 = β ¯ 1 ( 1 4 1 m s W c T X W c + κ 1 m s ) + ( b c ( Z ^ ( t f ) , 0 ) β ¯ 2 T m t + Y 2 ) W c Y 1 β ¯ 1 T W c + β ¯ 2 m t κ 2
d = 1 2 D ˙ 1 2 + 1 2 D ˙ 2 2 + 1 2 τ 1 W 1 * 2 + 1 2 τ 2 W 2 * 2 + 1 2 θ a u x 1 Δ ¯ 2 + M 2 R 1 .
Let the parameters k₁, k₂, τ, Y₁, Y₂, τ₁, τ₂, L₁, L₂, and k_aux be chosen such that M̄ is positive definite; then, invoking Equation (98) yields
J ˙ Z T J 1 ( F ( Z ) + G U ^ ) λ min ( M ¯ ) Ξ 2 + Ξ N ¯ + d 1 2 W ˜ c T Φ Z b c ( Z , t f t ) G R 1 G T Z J 1 ,
where λ min ( M ¯ ) stands for the minimum eigenvalue of M ¯ .
Case 1: If Φ = 0, then ∇_Z^T J₁ Ż < 0; that is, there exists χ > 0 that satisfies 0 < χ ≤ ‖Ż‖. Thus, we have
\[ \nabla_Z^{\mathrm T} J_1\, \dot Z \le -\chi \|\nabla_Z J_1\| < 0. \]
Invoking Equation (113) into (112) yields
J ˙ χ Z T J 1 λ min ( M ¯ ) ( Ξ 1 2 N ¯ λ min ( M ¯ ) ) 2 + 1 4 N ¯ 2 λ min ( M ¯ ) + d .
Invoking Equation (114) yields that J ˙ < 0 , as long as one of the following conditions holds.
Z T J 1 > N ¯ 2 + 4 d λ min ( M ¯ ) 4 χ λ min ( M ¯ )
or
Ξ > N ¯ 2 + 4 d λ min ( M ¯ ) 4 λ min 2 ( M ¯ ) + 1 2 N ¯ λ min ( M ¯ )
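The UUB threshold in the condition above can be checked numerically: beyond it, the scalar bound −λ_min(M̄)‖Ξ‖² + ‖N̄‖‖Ξ‖ + d implied by (114) is negative. A sketch with hypothetical values for λ_min(M̄), ‖N̄‖, and d:

```python
import numpy as np

# Hypothetical values standing in for lambda_min(M_bar), ||N_bar||, and d.
lam, N_norm, d = 0.8, 2.0, 1.5

# The UUB radius from the condition on ||Xi||.
bound = np.sqrt((N_norm**2 + 4.0 * d * lam) / (4.0 * lam**2)) + N_norm / (2.0 * lam)

# Beyond the radius, the quadratic upper bound on J_dot is strictly negative.
for x in np.linspace(bound * 1.001, bound * 10.0, 200):
    assert -lam * x**2 + N_norm * x + d < 0.0
```

The radius is exactly the positive root of −λx² + ‖N̄‖x + d = 0, which is why J̇ < 0 outside it.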
Case 2: If Φ = 1 , then Z T J 1 Z ˙ 0 . Invoking Φ = 1 and Equations (62) and (71) into (112) yields
J ˙ Z T J 1 ( F ( Z ) + G U * ) λ min ( M ¯ ) Ξ 2 + Ξ N ¯ + d + 1 2 Z T J 1 G R 1 G T Z ε ( Z , t ) .
Taking Assumptions 1 and 5 into account, (1/2)G R⁻¹ Gᵀ ∇_Z ε(Z, t) is bounded such that
1 2 G R 1 G T Z ε ( Z , t ) d M .
Invoking inequality (118) into (117) and considering Assumption 7 yields
J ˙ λ min ( Λ ) Z J 1 2 λ min ( M ¯ ) ( Ξ 1 2 N ¯ λ min ( M ¯ ) ) 2 + 1 4 N ¯ 2 λ min ( M ¯ ) + d + Z J 1 d M = λ min ( Λ ) ( Z J 1 d M 2 λ min ( Λ ) ) 2 + d M 2 4 λ min ( Λ ) λ min ( M ¯ ) ( Ξ 1 2 N ¯ λ min ( M ¯ ) ) 2 + 1 4 N ¯ 2 λ min ( M ¯ ) + d .
Invoking Equation (119) yields that J ˙ < 0 , as long as one of the following conditions holds.
Z T J 1 > N u m 4 λ min ( M ¯ ) λ min 2 ( Λ ) + d M 2 λ min ( Λ ) ,
where
Num = λ_min(M̄)d_M² + λ_min(Λ)‖N̄‖² + 4λ_min(M̄)λ_min(Λ)d
or
Ξ > N u m 4 λ min 2 ( M ¯ ) λ min ( Λ ) + 1 2 N ¯ λ min ( M ¯ ) .
Combining Case 1 and Case 2, we can obtain the conclusion that the augmented state vector Ξ is UUB. This completes the proof. □
Remark 6. 
Recalling Assumptions 5, 6, and Remark 5 yields that N ¯ is bounded.
Remark 7. 
d is expressed in quadratic form, as shown in Equation (111); hence, d > 0. Simultaneously, taking M̄ ≻ 0 into account, ‖N̄‖² + 4dλ_min(M̄) > 0. In addition, recalling Assumptions 1–4 and Remark 1 yields that d is bounded.
Remark 8. 
Recalling Assumption 7, we have that Λ ≻ 0. Simultaneously, taking Remark 7 into account yields that d > 0 and M̄ ≻ 0. Thus, it can be guaranteed that Num > 0, 4λ_min(M̄)λ_min²(Λ) > 0, and 4λ_min²(M̄)λ_min(Λ) > 0.

6. Simulation Results

In this section, the simulation of aircraft pitching attitude tracking under the proposed backstepping-based finite-horizon optimal control is described to illustrate the effectiveness of the scheme. The parameters of the aircraft model and of the designed control system are given in Table 2 and Table 3, respectively. The finite horizon is selected as t_f = 1 s, and the terminal constraint is chosen as ψ(Z(t_f), t_f) = 0. The basis functions are designed as b_c(Z, t_f − t) = [z₁ exp(0.1(t_f − t)), z₂ exp(0.1(t_f − t)), z₁²(t_f − t), z₂²(t_f − t), z₁³, z₂³, sin(z₁)exp(0.1(t_f − t)), sin(z₂)exp(0.1(t_f − t)), sin(2z₁), sin(2z₂), tanh(z₁)exp(0.1(t_f − t)), tanh(z₂)exp(0.1(t_f − t)), tanh(2z₁), tanh(2z₂)]ᵀ. The objective of the designed control law is to obtain optimized performance over the finite horizon t_f while guaranteeing basic tracking ability.
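The basis vector above can be transcribed directly into code; a sketch follows, with τ = t_f − t and the exponent sign taken as printed in the text.

```python
import numpy as np

def b_c(z1, z2, tau):
    # The 14 time-varying basis functions of the simulation, tau = t_f - t.
    e = np.exp(0.1 * tau)   # exponent sign follows the printed basis list
    return np.array([
        z1 * e, z2 * e,
        z1**2 * tau, z2**2 * tau,
        z1**3, z2**3,
        np.sin(z1) * e, np.sin(z2) * e,
        np.sin(2.0 * z1), np.sin(2.0 * z2),
        np.tanh(z1) * e, np.tanh(z2) * e,
        np.tanh(2.0 * z1), np.tanh(2.0 * z2),
    ])
```

At t = t_f (τ = 0) this reduces to the terminal basis b_c(Z, 0) used in the terminal-constraint error (74), and every component vanishes at Z = 0.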
In order to simulate the system uncertainties Δf₁(α) and Δf₂(α, q), the aerodynamic parameters are perturbed to 1.1 times their nominal values. The disturbances are given as d₁ = d₂ = 0.05 sin t. The NN and NDO are designed as the estimators of the system uncertainties and external disturbances, respectively. To illustrate their effectiveness, the estimates of the sums d₁ + Δf₁(α) and d₂ + Δf₂(α, q) are shown in Figure 2 and Figure 3, respectively. In addition, a comparison of the angle-of-attack responses with and without the designed estimators is shown in Figure 4. As shown in Figure 2 and Figure 3, the system uncertainties and external disturbances are estimated accurately and quickly by the NN and NDO. Figure 4 shows that, owing to the designed estimators, the adverse effects caused by the system uncertainties and external disturbances are greatly reduced, so the angle of attack tracks the command signal more precisely.
In order to achieve finite-horizon optimal control, the ADP algorithm is applied. The objective function is given as (1/2)e_cᵀe_c + (1/2)e_tfᵀe_tf in Equation (75), and the goal of the ADP algorithm is to minimize it. To illustrate the effectiveness of the designed algorithm, the response of the objective function is shown in Figure 5. It can be observed that the objective function gradually decreases and eventually converges to zero; hence, the designed ADP algorithm is effective.
The virtual control input q_a defined in Equation (26) together with q̂* defined in Equation (71) is shown in Figure 6, and the control input u_a defined in Equation (36) together with û* is shown in Figure 7. Under the action of the backstepping-based finite-horizon optimal control inputs, the response of the angle of attack is shown in Figure 8. Furthermore, the response of the angle of attack controlled by the backstepping method alone with the same parameters is given in Figure 8 as a contrast. The comparison shows that the system controlled by the designed backstepping-based finite-horizon optimal control method evolves in a finite-horizon optimal way; thus, better performance is obtained under the proposed method.

7. Conclusions

In this paper, a backstepping-based finite-horizon optimal control scheme is proposed to complete the task of angle of attack tracking in a finite-horizon optimal manner. An auxiliary system is designed to compensate for the input constraints. NN and NDO are applied to estimate the system uncertainties and external disturbances. Furthermore, the backstepping method containing NN and NDO is employed to ensure the stability of the system and suppress the adverse effects caused by the system uncertainties and external disturbances. In addition, the DSC technique is utilized to avoid the derivation operation in the process of the backstepping control. Moreover, the ADP algorithm is used to control the system in a finite-horizon optimal manner. In the design of the ADP, a critic NN is constructed by time-state-dependent feature functions to approximate the value function in the HJB equation. Finally, simulation results illustrate the effectiveness of the proposed backstepping-based finite-horizon optimal control scheme.
In future work, a practical experiment will be constructed and carried out to verify the proposed control method.

Author Contributions

Conceptualization, A.L., Y.S. and B.D.; methodology, A.L. and Y.S.; writing—original draft preparation, A.L. and Y.S.; writing—review and editing, A.L., Y.S. and B.D.; visualization, A.L. and Y.S.; supervision, B.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

Author Ang Li was employed by the company Shenyang Aircraft Design and Research Institute Yangzhou Collaborative Innovation Research Institute Co., Ltd., Yangzhou, China. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Derivations of Equations

Appendix A.1. The Derivation of Equation (33) Is as Follows

V ˙ 1 = z 1 z ˙ 1 + e e ˙ + W ˜ 1 T Ω 1 W ˜ ˙ 1 + D ˜ 1 D ˜ ˙ 1 = z 1 ( f 1 f 1 ( α c ) + q c * + z 2 k 1 z 1 L 1 1 W ˜ 1 T a 1 ( α ) + D ˜ 1 + S 2 ) + e ( e τ + ( q ˙ c ) ) + W ˜ 1 T Ω 1 ( Ω 1 1 ( a 1 ( α ) z 1 L 1 1 τ 1 W ^ 1 ) ) + D ˜ 1 ( D ˙ 1 L 1 D ˜ 1 + W ^ 1 T a 1 ( α ) z 1 ) = ( f 1 f 1 ( α c ) + q c * ) + z 1 z 2 k 1 z 1 2 e 2 τ + e ( q ˙ c ) + z 1 S 2 τ 1 W ˜ 1 T W ^ 1 + D ˜ 1 D ˙ 1 L 1 D ˜ 1 2 + D ˜ 1 W ^ 1 T a 1 ( α )

Appendix A.2. The Derivation of Equation (34) Is as Follows

\[ \tilde W_1^{\mathrm T} \hat W_1 = \tfrac12 \tilde W_1^{\mathrm T} (\tilde W_1 + W_1^{*}) + \tfrac12 (\hat W_1 - W_1^{*})^{\mathrm T} \hat W_1 = \tfrac12 \|\tilde W_1\|^2 + \tfrac12 \|\hat W_1\|^2 + \tfrac12 \tilde W_1^{\mathrm T} W_1^{*} - \tfrac12 W_1^{*\mathrm T} \hat W_1 = \tfrac12 \|\tilde W_1\|^2 + \tfrac12 \|\hat W_1\|^2 - \tfrac12 \|W_1^{*}\|^2 \ge \tfrac12 \|\tilde W_1\|^2 - \tfrac12 \|W_1^{*}\|^2 \]
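The inequality derived above holds for any weight vectors; it can be checked numerically, assuming the convention W̃₁ = Ŵ₁ − W₁* implied by the first step of the derivation:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    W_star = rng.normal(size=5)      # ideal weights W_1^*
    W_hat = rng.normal(size=5)       # estimated weights
    W_til = W_hat - W_star           # estimation error (sign convention assumed)
    # W_til^T W_hat >= 0.5*||W_til||^2 - 0.5*||W_star||^2
    assert W_til @ W_hat >= 0.5 * W_til @ W_til - 0.5 * W_star @ W_star - 1e-12
```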

Appendix A.3. The Derivation of Equation (35) Is as Follows

V ˙ 1 z 1 ( f 1 f 1 ( α c ) + q c * ) + 1 2 z 1 2 + 1 2 z 2 2 k 1 z 1 2 e 2 τ + 1 2 e 2 + 1 2 M 2 + 1 2 z 1 2 + 1 2 S 2 2 1 2 τ 1 W ˜ 1 2 + 1 2 τ 1 W 1 * 2 + 1 2 D ˜ 1 2 + 1 2 D ˙ 1 2 L 1 D ˜ 1 2 + 1 2 ι 1 a 1 2 D ˜ 1 2 + 1 2 ι 1 1 W ˜ 1 2 = z 1 ( f 1 f 1 ( α c ) + q c * ) ( k 1 1 ) z 1 2 + 1 2 z 2 2 ( 1 τ 1 2 ) e 2 + 1 2 S 2 2 1 2 ( τ 1 ι 1 1 ) W ˜ 1 2 ( L 1 1 2 1 2 ι 1 a 1 2 ) D ˜ 1 2 + 1 2 D ˙ 1 2 + 1 2 τ 1 W 1 * 2 + 1 2 M 2

Appendix A.4. The Derivation of Equation (38) Is as Follows

z ˙ 2 = f 2 + L 2 1 W 2 * T a 2 ( α , q ) + D 2 q ˙ c + k a u S 2 + g 2 u * + g 2 u a = f 2 f 2 ( α c , q c ) + g 2 u * L 2 1 W ^ 2 T a 2 ( α , q ) + L 2 1 W 2 * T a 2 ( α , q ) + D 2 D ^ 2 k 2 z 2 + λ ˙ q ˙ c = f 2 f 2 ( α c , q c ) + g 2 u * L 2 1 W ˜ 2 T a 2 ( α , q ) + D ˜ 2 k 2 z 2 + e ˙ = f 2 f 2 ( α c , q c ) + g 2 u * L 2 1 W ˜ 2 T a 2 ( α , q ) + D ˜ 2 k 2 z 2 e τ q ˙ c

References

  1. Castañeda, H.; Salas-Peña, O.S.; de León-Morales, J. Extended observer based on adaptive second order sliding mode control for a fixed wing UAV. ISA Trans. 2017, 66, 226–232. [Google Scholar] [CrossRef] [PubMed]
  2. Lee, C.H.; Chung, M.J. Gain-scheduled state feedback control design technique for flight vehicles. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 173–182. [Google Scholar]
  3. Snell, S.A.; Enns, D.F.; Garrard, W.L. Nonlinear inversion flight control for a supermaneuverable aircraft. J. Guid. Control. Dyn. 1992, 15, 976–984. [Google Scholar] [CrossRef]
  4. Lungu, M. Stabilization and control of a UAV flight attitude angles using the backstepping method. World Acad. Sci. Eng. Technol. 2012, 6, 241–248. [Google Scholar]
  5. Zhang, J.; Sun, C.; Zhang, R.; Qian, C. Adaptive sliding mode control for re-entry attitude of near space hypersonic vehicle based on backstepping design. IEEE/CAA J. Autom. Sin. 2015, 2, 94–101. [Google Scholar] [CrossRef]
  6. Kang, Y.; Chen, S.; Wang, X.; Cao, Y. Deep convolutional identifier for dynamic modeling and adaptive control of unmanned helicopter. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 524–538. [Google Scholar] [CrossRef] [PubMed]
  7. Wu, D.; Chen, M.; Gong, H.; Wu, Q. Robust backstepping control of wing rock using disturbance observer. Appl. Sci. 2017, 7, 219. [Google Scholar] [CrossRef]
  8. Wu, D.; Chen, M.; Gong, H. Robust control of post-stall pitching maneuver based on finite-time observer. ISA Trans. 2017, 70, 53–63. [Google Scholar] [CrossRef] [PubMed]
  9. Liu, H.; Lu, G.; Zhong, Y. Robust LQR Attitude Control of a 3-DOF Laboratory Helicopter for Aggressive Maneuvers. IEEE Trans. Ind. Electron. 2013, 60, 4627–4636. [Google Scholar] [CrossRef]
  10. Zarei, J.; Montazeri, A.; Motlagh, M.R.J.; Poshtan, J. Design and comparison of LQG/LTR and H controllers for a VSTOL flight control system. J. Frankl. Inst. 2007, 344, 577–594. [Google Scholar] [CrossRef]
  11. Bellman, R. Dynamic programming. Science 1966, 153, 34–37. [Google Scholar] [CrossRef] [PubMed]
  12. Chanane, B. Optimal control of nonlinear systems: A recursive approach. Comput. Math. Appl. 1998, 35, 29–33. [Google Scholar] [CrossRef]
  13. Mracek, C.P.; Cloutier, J.R. Control designs for the nonlinear benchmark problem via the state-dependent Riccati equation method. Int. J. Robust Nonlinear Control 1998, 8, 401–433. [Google Scholar] [CrossRef]
  14. Werbos, P. Approximate Dynamic Programming for Real-Time Control and Neural Modeling; Academic Press: New York, NY, USA, 1977. [Google Scholar]
  15. Wei, Q.; Song, R.; Yan, P. Data-driven zero-sum neuro-optimal control for a class of continuous-time unknown nonlinear systems with disturbance using ADP. IEEE Trans. Neural Netw. Learn. Syst. 2017, 27, 444–458. [Google Scholar] [CrossRef] [PubMed]
  16. Ferrari, S.; Stengel, R.F. Online adaptive critic flight control. J. Guid. Control. Dyn. 2004, 27, 777–786. [Google Scholar] [CrossRef]
  17. Ferrari, S.; Steck, J.E.; Chandramohan, R. Adaptive feedback control by constrained approximate dynamic programming. IEEE Trans. Syst. Man Cybern. Part 2008, 38, 982–987. [Google Scholar] [CrossRef] [PubMed]
  18. Zhou, Y.; Kampen, E.J.V.; Chu, Q.P. Incremental approximate dynamic programming for nonlinear adaptive tracking control with partial observability. J. Guid. Control. Dyn. 2018, 41, 1–14. [Google Scholar] [CrossRef]
  19. Fan, Q.Y.; Yang, G.H. Adaptive actor–critic design-based integral sliding-mode control for partially unknown nonlinear systems with input disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2017, 27, 165–177. [Google Scholar] [CrossRef] [PubMed]
  20. Sun, J.; Liu, C. Backstepping-based adaptive dynamic programming for missile-target guidance systems with state and input constraints. J. Frankl. Inst. 2018, 355, 8412–8440. [Google Scholar] [CrossRef]
  21. Sun, J.; Liu, C.; Zhao, X. Backstepping-based zero-sum differential games for missile-target interception systems with input and output constraints. IET Control Theory Appl. 2018, 12, 243–253. [Google Scholar] [CrossRef]
  22. Xia, R.; Chen, M.; Wu, Q. Neural network based optimal adaptive attitude control of near-space vehicle with system uncertainties and disturbances. Proc. Inst. Mech. Eng. Part J. Aerosp. Eng. 2019, 233, 641–656. [Google Scholar] [CrossRef]
  23. Cui, X.; Zhang, H.; Luo, Y.; Zu, P. Online finite-horizon optimal learning algorithm for nonzero-sum games with partially unknown dynamics and constrained inputs. Neurocomputing 2016, 185, 37–44. [Google Scholar] [CrossRef]
  24. Cheng, T.; Lewis, F.L.; Abu-Khalaf, M. A neural network solution for fixed-final time optimal control of nonlinear systems. Automatica 2007, 43, 482–490. [Google Scholar] [CrossRef]
  25. Zhao, Q.; Xu, H.; Jagannathan, S. Neural network-based finite-horizon optimal control of uncertain affine nonlinear discrete-time systems. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 486–499. [Google Scholar] [CrossRef] [PubMed]
  26. Sun, J.; Liu, C. Finite-horizon differential games for missile–target interception system using adaptive dynamic programming with input constraints. Int. J. Syst. Sci. 2018, 49, 264–283. [Google Scholar] [CrossRef]
  27. Xu, H.; Jagannathan, S. Neural network-based finite horizon stochastic optimal control design for nonlinear networked control systems. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 472–485. [Google Scholar] [CrossRef] [PubMed]
  28. Chen, M.; Tao, G.; Jiang, B. Dynamic surface control using neural Networks for a class of uncertain nonlinear systems with input saturation. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2086–2097. [Google Scholar] [CrossRef] [PubMed]
  29. Abu-Khalaf, M.; Lewis, F.L. Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach. Automatica 2005, 41, 779–791. [Google Scholar] [CrossRef]
  30. Dierks, T.; Jagannathan, S. Optimal control of affine nonlinear continuous-time systems. In Proceedings of the 2010 American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 1568–1573. [Google Scholar]
  31. Wang, D.; Liu, D.; Li, H.; Ma, H. Neural-network-based robust optimal control design for a class of uncertain nonlinear systems via adaptive dynamic programming. Inf. Sci. 2014, 282, 167–179. [Google Scholar] [CrossRef]
  32. Xu, H.; Zhao, Q.; Dierks, T.; Jagannathan, S. Neural network-based finite-horizon approximately optimal control of uncertain affine nonlinear continuous-time systems. In Proceedings of the 2014 American Control Conference, Portland, OR, USA, 4–6 June 2014; pp. 1243–1248. [Google Scholar]
Figure 1. The control block diagram of the proposed method.
Figure 2. The estimations of the sum of d 1 and Δ f 1 ( α ) .
Figure 3. The estimations of the sum of d 2 and Δ f 2 ( α , q ) .
Figure 4. The response of the angle of attack controlled with and without estimators.
Figure 5. The response of the objective function.
Figure 6. The response of q a and q * .
Figure 7. The response of u a and u * .
Figure 8. The response of the angle of attack with and without finite-horizon optimization.
Table 1. The illustration of aircraft model parameters.
α: angle of attack; γ: flight path angle
q: pitching rate; I_yy: moment of inertia
δ_z (u): constrained normal thrust vectoring angle; q̄: dynamic pressure
M: mass of aircraft; S: reference surface area of wing
V: airspeed of aircraft; c̄: mean aerodynamic chord
L: lift force; C_m: pitch moment aerodynamic coefficient
T: thrust; x_T: distance between engine nozzle and center of mass
Table 2. The parameters of the aircraft model.
M (kg): 10,617; V (m/s): 70
T (N): 146,000; γ (°): 0
I_yy (kg·m²): 77,095; q̄ (kg/(m·s²)): 2724
S (m²): 57.7; c̄ (m): 4.4
x_T (m): 8.5; u_M (°): 15
δ_c (°): −71.7; α_c (°): 10
α_0 (°): 0; q_0 (°): 0
Table 3. The parameters of the control system.
k₁: 2; k₂: 20
L₁: 200; L₂: 200
τ₁: 100; τ₂: 100
τ: 0.1; k_aux: 7.9
Y₁: 40·1_{14×1}; Y₂: 120·I_{14×14}