A Novel Approach to Fixed-Time Stabilization for a Class of Uncertain Second-Order Nonlinear Systems

This paper is concerned with the problem of fixed-time stabilization for a class of uncertain second-order nonlinear systems. By delicately introducing extra manipulations into the feedback domination and revamping the technique of adding a power integrator, a new approach is developed by which a state feedback controller, together with a suitable Lyapunov function that is critical for verifying fixed-time convergence, can be explicitly constructed to render the closed-loop system fixed-time stable. The major novelty of this paper lies in a subtle strategy that offers a distinct perspective on both controller design and stability analysis for the fixed-time stabilization of nonlinear systems. Finally, the proposed approach is applied to the attitude stabilization of a spacecraft to demonstrate its merits and effectiveness.


Introduction
Without doubt, the stabilization of nonlinear systems is important as a first step toward further control objectives, such as output tracking, disturbance attenuation, and/or decoupling. In the past decades, global asymptotic stabilization of nonlinear systems has been widely recognized as a challenging problem and has received a great deal of attention from the nonlinear control community. With the help of various mathematical tools, tremendous progress has been made in developing powerful design methodologies for global asymptotic stabilization, including backstepping design [1], feedback linearization [2], sliding mode control [3], fuzzy control [4], nonlinear H∞ control [5], and so on.
Compared to asymptotic stabilization, whose convergence rate is, at best, exponential with an infinite settling time [6,7], finite-time stabilization is more attractive, as systems with finite-time convergence usually exhibit superior properties such as faster convergence, higher accuracy, and better robustness to uncertainties and/or external disturbances [7][8][9][10][11], which are rather important for demanding applications. Aware of these advantages, researchers have intensively studied the finite-time stabilization problem for nonlinear systems, and numerous interesting results have been obtained in the past decades (see, e.g., [12][13][14][15][16][17][18][19][20][21]). Among the existing results, owing to its benefits of fast response and easy implementation, terminal sliding mode control [20], together with its nonsingular modification [21], is widely recognized as one of the most popular and effective approaches to finite-time stabilization. By designing a suitable nonlinear sliding surface and constructing a discontinuous controller, the terminal sliding mode phase can be reached in finite time, thereby guaranteeing finite-time stabilization of the closed-loop system [20][21][22].
It should be noted that the settling time associated with finite-time designs is intrinsically related to the initial states [11,23]. That is, knowledge of the initial states is critical for settling-time estimates, which may prevent the application of finite-time schemes when such knowledge is unavailable [24]. Fortunately, the notion of fixed-time stability, along with the Lyapunov-like criteria presented in the seminal work [23], resolves this drawback effectively. More specifically, as stated in [23], fixed-time stability not only implies global uniform finite-time stability but also provides a settling-time function that is uniformly bounded by a tunable constant, which depends on the design parameters but is independent of the initial states.
In other words, fixed-time controller design allows a predetermined bound on the settling-time function to be assigned. Fixed-time stabilization is thus very promising, especially when the controller is intentionally designed to achieve a certain control precision within a desired time interval [23,25,26]. Realizing this feature, research has more recently focused on the fixed-time stabilization of various nonlinear systems, for instance, high-order regulators [27], multi-agent systems [28,29], power systems [30], etc.
To the best of our knowledge, most existing studies on fixed-time stabilization are essentially concerned with scalar systems or single-input control structures [24,[26][27][28][29][30]]. For multivariable multi-input systems, very few results are available in the literature; see, for example, [31][32][33], in which the fixed-time stabilization of time-invariant linear and nonlinear systems is addressed, respectively. In fact, due to the complexity of multivariable nonlinear systems, as well as the lack of constructive/systematic strategies for ensuring fixed-time convergence, the fundamental problem of how to construct a controller that renders multivariable nonlinear systems fixed-time stable remains largely open.
Aware of the above obstacles, we develop a new approach in this paper. Compared with the existing works [24,[26][27][28][29][30][31][32][33]], the main contributions of this paper can be summarized in two aspects: (i) This paper focuses on the fixed-time stabilization of time-varying second-order multivariable nonlinear systems; thus, compared with the existing results concerning scalar or single-input systems (e.g., [24,[26][27][28][29][30]]), we offer novel insight into how to tackle the problem of fixed-time stabilization for a more general class of nonlinear systems. (ii) By introducing extra manipulations into the feedback domination, the technique of adding a power integrator [34] is skillfully revamped to develop a distinctive approach to the synthesis of a fixed-time stabilizer together with a Lyapunov function, which is significantly important for verifying fixed-time convergence and stability.
Notations: All notations utilized throughout this paper are as follows. R is the set of real numbers, R_+ is the set of nonnegative real numbers, and R^n denotes the n-dimensional Euclidean space. Furthermore, R^{n×m} is the set of n × m real matrices, I_n denotes the identity matrix of dimension n, (·)^T represents the transpose of a vector or a matrix, and (·)^+ is the Moore-Penrose pseudoinverse of a matrix. Given a constant p ∈ {p ∈ R | p = p_1/p_2 with p_1 a positive integer and p_2 a positive odd integer} ⊂ R, a vector y = (y_1, . . . , y_n)^T ∈ R^n, and a diagonal matrix A = diag(a_1, . . . , a_n) ∈ R^{n×n}, for simplicity of notation we denote y^p = (y_1^p, . . . , y_n^p)^T ∈ R^n, A^p = diag(a_1^p, . . . , a_n^p) ∈ R^{n×n}, and sign(y) = (sign(y_1), . . . , sign(y_n))^T ∈ R^n, where sign(·) is the standard sign function satisfying sign(s) = 1 if s > 0, sign(s) = 0 if s = 0, and sign(s) = −1 if s < 0.
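The componentwise power and sign conventions above can be mirrored numerically; a minimal NumPy sketch (the helper name `odd_power` is illustrative, not from the paper), where the odd denominator p_2 makes y^p real-valued and sign-preserving:

```python
import numpy as np

def odd_power(y, p):
    """Componentwise y^p for p = p1/p2 with p2 a positive odd integer.

    Because p2 is odd, the real p-th power of a negative number is
    well defined and keeps its sign, so it can be computed as
    sign(y) * |y|^p componentwise.
    """
    y = np.asarray(y, dtype=float)
    return np.sign(y) * np.abs(y) ** p

y = np.array([-8.0, 0.0, 27.0])
cube_roots = odd_power(y, 1 / 3)  # componentwise cube root, approx [-2, 0, 3]
signs = np.sign(y)                # sign(y) as defined above: [-1, 0, 1]
```

Passing the exponent as a float is adequate for a sketch; an exact rational p_1/p_2 only matters symbolically.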

Problem Formulation
Consider a class of nonlinear systems described by

ẋ_1 = x_2,  ẋ_2 = f(x, t) + G(x, t)u + d(x, t),   (1)

where x_1 = (x_11, . . . , x_1n)^T ∈ R^n, x_2 = (x_21, . . . , x_2n)^T ∈ R^n, and x = (x_1^T, x_2^T)^T ∈ R^{2n} denote the system states, u ∈ R^m is the control input, d(x, t) = (d_1(x, t), . . . , d_n(x, t))^T ∈ R^n describes the model uncertainties and/or external disturbances, and f(x, t) and G(x, t) are smooth functions with rank(G(x, t)) = n for all (x, t) ∈ R^{2n} × R_+, which in turn ensures the controllability of system (1) (see, e.g., [35]). The initial time t_0 is set to zero, i.e., t_0 = 0, and the initial state of system (1) is denoted by x(0) = x_0 ∈ R^{2n}. It is worth mentioning that a very large class of physical systems can be represented by system (1), including spacecraft [36], robotic manipulators [37], etc. Additionally, the solutions of system (1) are understood in the sense of Filippov [38], since the control input u = u(x, t) is allowed to be discontinuous (piecewise continuous) and d(x, t) is also assumed to be piecewise continuous and bounded as follows.

Assumption 1.
There exists a known constant ρ > 0 such that ‖d(x, t)‖ ≤ ρ for all (x, t) ∈ R^{2n} × R_+.

Under Assumption 1, the main objective of this paper is to design a controller u = u(x, t) that renders the origin of system (1) fixed-time stable in the sense of the following definition.

Definition 1 ([23]). Consider the nonlinear system

ẋ = h(x, t),   (2)
where x ∈ R n , t ∈ R + , and h : R n × R + → R n is discontinuous (piecewise continuous). The initial time is t 0 = 0 and the initial state is x(0) = x 0 . The solutions of system (2) are understood in the sense of Filippov [38]. Then, the origin of system (2) is said to be fixed-time stable if it is globally uniformly finite-time stable (see, e.g., [39]) and the settling-time function T(x 0 ) is globally uniformly bounded by a positive constant; i.e., there exists a positive constant T max > 0 such that T(x 0 ) ≤ T max for all x 0 ∈ R n .
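Definition 1 is typically certified via the Lyapunov-like criterion of [23]: if a continuously differentiable, positive definite, proper function V satisfies, along the solutions of (2),

```latex
\dot{V}(x) \le -\alpha V^{p}(x) - \beta V^{q}(x), \qquad \alpha,\beta > 0,\quad 0 < p < 1 < q,
```

then the origin of (2) is fixed-time stable with the uniform settling-time bound T(x_0) ≤ 1/(α(1 − p)) + 1/(β(q − 1)). This is the type of estimate exploited later through Lemma 3.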

Remark 1.
Compared to global uniform finite-time stability, the key feature of fixed-time stability is the uniformity of its settling time. To see this point more clearly, the following two examples are considered. First, it is easy to see that the origin of system (3) is globally uniformly finite-time stable with the settling-time function T(x(0)) = x^{2/3}(0); specifically, the solutions of system (3) are of the form where x(0) is the initial state. When an additional drift term is added, that is, considering the system (4) below, one can easily verify that the solutions of system (4) are which means that the origin of system (4) is fixed-time stable with the settling-time function T(x(0)) satisfying T(x(0)) ≤ π/2 uniformly in x(0).
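Since the displayed systems (3) and (4) are not reproduced here, the same contrast can be illustrated with the classic pair ẋ = −x^{1/3} (finite-time: the settling time grows with |x(0)|) and ẋ = −x^{1/3} − x^3 (fixed-time: the settling time is bounded by ∫_0^∞ dx/(x^{1/3} + x^3) ≤ 2, regardless of x(0)). A rough forward-Euler sketch, under these assumed example dynamics:

```python
import math

def cbrt(x):
    """Sign-preserving real cube root."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def settle_time(f, x0, tol=1e-3, dt_max=1e-3):
    """Forward-Euler time for |x| to shrink below tol under x' = f(x).

    The step is shrunk where |f| is large so the stiff cubic term of the
    fixed-time example does not destabilize the integration.
    """
    x, t = float(x0), 0.0
    while abs(x) > tol:
        fx = f(x)
        dt = min(dt_max, 0.1 * abs(x) / abs(fx))
        x += dt * fx
        t += dt
    return t

finite_time = lambda x: -cbrt(x)           # settling time grows with |x(0)|
fixed_time = lambda x: -cbrt(x) - x ** 3   # settling time uniformly bounded
```

Running `settle_time(fixed_time, x0)` for x0 = 1 and x0 = 1e6 returns times below the fixed bound, whereas `settle_time(finite_time, x0)` grows roughly like x0^{2/3}.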

Technical Lemmas
We list four technical lemmas that will be frequently used in proving the main results of this paper. The proofs of Lemmas 1-3 are provided, whereas that of Lemma 4 can be found in [34,40].

Lemma 1. Let m ≥ 1 be a ratio of two odd integers. For any x, y ∈ R, the following inequality holds:

Proof. It suffices to consider the case x ≠ y. Consider the function A direct calculation shows that g(s) attains its minimum at s = 0.5. This implies that Since m is a ratio of two odd integers, the result of Lemma 1 follows by letting s = x/(x − y).
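Lemma 1's displayed inequality is not reproduced above; in its standard form in the adding-a-power-integrator literature it reads |x − y|^m ≤ 2^{m−1}|x^m − y^m| for m ≥ 1 a ratio of two odd integers (equality at y = −x, matching the minimum at s = 0.5 in the proof). A quick numerical spot-check of this assumed form, with an illustrative sign-preserving power helper:

```python
import numpy as np

def odd_pow(x, m):
    """x^m for m a ratio of two odd integers (real-valued, sign-preserving)."""
    return np.sign(x) * np.abs(x) ** m

rng = np.random.default_rng(0)
m = 5.0 / 3.0                      # a ratio of two odd integers, m >= 1
x = rng.uniform(-10.0, 10.0, 1000)
y = rng.uniform(-10.0, 10.0, 1000)

lhs = np.abs(x - y) ** m
rhs = 2.0 ** (m - 1.0) * np.abs(odd_pow(x, m) - odd_pow(y, m))
assert np.all(lhs <= rhs + 1e-9)   # the inequality holds samplewise
```

The small additive tolerance guards against floating-point rounding near the equality case y = −x.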
Lemma 2.
Proof. Two cases are considered in the proof. Case 1: When h(0) > 1, it is clear that Assume that S is nonempty; then t_2 = inf M exists. By the continuity of h(·) and m(·), one has h( for all t ∈ [0, T]. With this in mind, it can be deduced from (6) that yielding a contradiction; thus, S is empty. Letting t_4 be defined as one readily has h(t) ≤ 1 for all t ∈ [t_4, ∞), since h(t) is decreasing. Case 2: In the case when h(0) ≤ 1, using arguments similar to those in Case 1, one can easily derive that h(t) = 0 for all t ∈ [t_5, ∞) with Combining the two cases shows that a conservative estimate of the time after which h(t) = 0 is exactly t_r = t_4 + t_5, as given by (5).
For any x, y ∈ R, the following inequality holds:
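The displayed inequality here is elided; a Young-type inequality commonly cited from [34,40] in this framework states that, for any c, d > 0 and any γ > 0, |x|^c |y|^d ≤ (c/(c + d)) γ |x|^{c+d} + (d/(c + d)) γ^{−c/d} |y|^{c+d}. A numerical spot-check of this assumed form, with illustrative constants:

```python
import numpy as np

rng = np.random.default_rng(1)
c, d, gamma = 2.0, 3.0, 1.7        # illustrative constants, gamma > 0
x = rng.uniform(-5.0, 5.0, 1000)
y = rng.uniform(-5.0, 5.0, 1000)

lhs = np.abs(x) ** c * np.abs(y) ** d
rhs = (c / (c + d)) * gamma * np.abs(x) ** (c + d) \
    + (d / (c + d)) * gamma ** (-c / d) * np.abs(y) ** (c + d)
assert np.all(lhs <= rhs + 1e-9)   # weighted Young's inequality holds
```

The free parameter γ is what makes this inequality useful for feedback domination: it lets a cross term be absorbed into the two pure-power terms with a tunable trade-off.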

Fixed-Time Stabilizing Controller Design
We first summarize our approach to the construction of a fixed-time stabilizing controller for system (1) as follows.

Proof.
A new design philosophy for constructing a fixed-time stabilizer is presented in the proof. More specifically, the technique of adding a power integrator [34] is skillfully revamped by introducing extra manipulations into the construction of the virtual controls, so that a two-step design procedure is obtained whereby a fixed-time stabilizing controller is explicitly designed. The details are as follows.
Step 1: Choose the scalar function V_1 : R^n → R as below, which is obviously positive definite, proper (a scalar function γ : R^n → R_+ is said to be proper if, for any c > 0, the set γ^{−1}([0, c]) is compact in R^n), and continuously differentiable. Clearly, the time derivative of V_1(x) along the solutions of system (1) takes the following form. It follows that for all x_1 ∈ R^n. By Lemmas 1 and 4, it can be verified that With this in mind, (10) becomes for all x_1 ∈ R^n.
Step 2: Based on the virtual control x_2^*(x_1) = −2x_1^{1+κ_1}, define the change of coordinates below. As (1 + κ_1) is a ratio of two positive odd integers, there is clearly a one-to-one correspondence between (ξ_1^T, ξ_2^T) and (x_1^T, x_2^T). Consider the scalar function V : R^{2n} → R as below. Note that V(x) is clearly positive definite, proper, and continuously differentiable (for the proofs, please refer to Appendix A). A direct calculation yields, for all (x, t) ∈ (R^{2n} × R_+) \ N_1, where N_1 is the set of measure zero [42] defined below. For brevity, we let In addition, it is easy to see from (12) and Lemma 2 that Also, by Lemma 1, we obtain x_{2i} Using (14)-(16) and Lemma 4, we further have It follows from (13) and (17) that for all (x, t) ∈ (R^{2n} × R_+) \ N_1. In order to guarantee the state convergence of the overall system, the controller u is designed as in (9), in which a discontinuous (piecewise continuous) term ρ sign(ξ_2) is included to compensate effectively for the influence of the uncertainties d(x, t). Substituting the controller (9) into (18) yields where N_2 is a set of measure zero. Now, similarly to the derivation of (16), it is not difficult to derive that In addition, it can be shown directly by Lemma 2 that With (20)-(22) in mind, we have which immediately leads to for all (x, t) ∈ (R^{2n} × R_+) \ (N_1 ∪ N_2). Notably, it follows from [43] and (23) that, with the initial state x_0 ∈ R^{2n}, the (non-unique) solutions x(t) of the closed-loop system (1) under the (piecewise continuous) controller (9) are well defined on [0, ∞) and locally absolutely continuous; moreover, V(x(t)) is continuous, decreasing, and satisfies for all t ∈ [0, ∞). As 2^{(2+κ_1)κ_1/2} > 0 and (2n)^{−κ_2/2} 2^{(2+κ_2)κ_1/2} > 0, it readily follows from Lemma 3 that V(x(t)) = 0 for all t ∈ [T_max, ∞), where T_max is given by (8).
This along with the fact of V(x) being positive definite, proper, and continuously differentiable leads to x(t) = 0 for all t ∈ [T max , ∞); i.e., the origin of the closed-loop system (1) under the (piecewise continuous) controller (9) is fixed-time stable.

Remark 2.
Notably, the controller parameters are simply κ_1 and κ_2. Once κ_1 and κ_2 are determined, the associated settling-time estimate T_max can be computed accordingly. In practice, T_max can be suitably assigned by adjusting κ_1 and κ_2 so as to obtain a smaller settling time (convergence time) and estimate, although this may increase the control effort accordingly.
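Since expression (8) for T_max is not reproduced above, the qualitative dependence on κ_1 and κ_2 can be illustrated with the generic two-term bound T_max = 1/(a(1 − p)) + 1/(b(q − 1)) for V̇ ≤ −aV^p − bV^q, taking p = (2 + κ_1)/2 and q = (2 + κ_2)/2 as in the proof; the gains a and b below are placeholders for the (unspecified here) constants of (8):

```python
def t_max(kappa1, kappa2, a=1.0, b=1.0):
    """Generic fixed-time settling bound for
    V' <= -a*V**p - b*V**q with p = (2+k1)/2 < 1 < q = (2+k2)/2,
    i.e. -2 < kappa1 < 0 < kappa2. a and b are placeholder gains."""
    p = (2.0 + kappa1) / 2.0
    q = (2.0 + kappa2) / 2.0
    return 1.0 / (a * (1.0 - p)) + 1.0 / (b * (q - 1.0))

bound_paper_exponents = t_max(-2 / 15, 2 / 15)  # the simulation's exponent choice
bound_aggressive = t_max(-0.5, 0.5)             # larger |k1|, k2 shrink the bound
```

With unit gains the bound simplifies to 2/(a|κ_1|) + 2/(bκ_2), making the trade-off in Remark 2 explicit: pushing the exponents further from 1 shortens the guaranteed settling time at the cost of control effort.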

Remark 3.
Although the technique of adding a power integrator was also employed in [24] to achieve fixed-time stabilization, the system considered there has only a single input. Unlike the results of [24], the approach developed in this paper applies not only to a class of second-order multivariable multi-input systems but also to systems with a single input. Moreover, the controller designed in [24] is merely continuous, so possible external disturbances were necessarily neglected there. In contrast, the controller resulting from the approach presented in this paper is discontinuous and is therefore capable of handling both model uncertainties and external disturbances; of course, when there is no uncertainty/disturbance, the resultant controller becomes continuous.

Remark 4.
The presented controller (9) is constructed using fractional powers so that the resultant control efforts provide finite-time (fixed-time) state convergence; however, the convergence rate degrades when the initial state is far away from the origin. As shown in [30], a potential strategy for achieving fast convergence whether the initial state is close to or far away from the origin is to design the controller in a uniform way, concurrently incorporating feedback terms with both linear and fractional powers. Addressing this issue will be one of our future research directions.

Remark 5.
In the proof of Theorem 1, two zero-measure sets N_1 and N_2, which have no influence on the stability analysis, are excluded from the region over which inequality (23) is verified; this means that, in the stability analysis of the closed-loop system, it suffices to consider only the region where both d(x, t) and u are continuous. A notable feature of the closed-loop system is that when (x(t), t) ∈ N_1, the discontinuity of d(x, t) results in an abrupt change in the control signals; likewise, when (x(t), t) ∈ N_2, the controller u becomes discontinuous, and the chattering phenomenon might appear in the controller responses.

Simulation Studies
The proposed design approach is now applied to the attitude stabilization problem of a spacecraft. Consider the attitude control model of a spacecraft given in [35,44], which has the same form as (1) with n = m = 3. The system states of this model are the three Euler angles (φ, θ, ψ) and their derivatives (φ̇, θ̇, ψ̇), i.e., x_1 = (x_11, x_12, x_13)^T = (φ, θ, ψ)^T and x_2 = (x_21, x_22, x_23)^T = (φ̇, θ̇, ψ̇)^T. Moreover, the drift term f(x, t) is time-invariant and G(x, t) is a constant matrix [44]; thus, f(x, t) and G(x, t) are denoted briefly by f(x) = (f_1(x), f_2(x), f_3(x))^T and G, respectively, while taking the following form [35]: Here, I_x = 2000 N·m·s², I_y = 400 N·m·s², and I_z = 2000 N·m·s² are the moments of inertia about the coordinate axes, ω_0 = 1.0312 × 10^{−3} rad/s denotes the orbital rate, and s(·) and c(·) represent the sine and cosine functions, respectively. Additionally, we assume that the attitude model suffers from the following discontinuous disturbances For demonstration, the parameters κ_1 and κ_2 are selected as κ_1 = −2/15 and κ_2 = 2/15, respectively. With these settings, the settling-time estimate is T_max = 35, and the associated gain matrices L_1(x), L_2(x), and Φ(x) can be determined accordingly.
The simulation results shown in Figures 1 and 2 are conducted for the initial state x(0) = (0.5, 0.3, −0.48, −1.9, −1.2, 2)^T. Clearly, Figure 1 shows that the finite-time stabilization task is successfully accomplished by the corresponding control signals shown in Figure 2, where the abrupt changes in the control signals originate from the discontinuity of d(x, t) at t = 0.4 s. It can be seen that the settling time (convergence time) of the state trajectories is much less than the settling-time estimate T_max = 35, which in turn confirms that fixed-time stabilization is achieved by the controller designed via Theorem 1. In addition, Figure 3 depicts the convergence times of simulations conducted with different initial states, from which one can observe the correspondence between convergence time and initial state and, moreover, draw the same conclusion (i.e., the success of fixed-time stabilization). Notably, this example exhibits the merits and effectiveness of the proposed approach.

Conclusions
This paper has addressed the problem of fixed-time stabilization for a class of second-order (multivariable) nonlinear systems. A new design approach was developed by skillfully introducing extra manipulations into the feedback domination and delicately revamping the technique of adding a power integrator. Under the presented approach, a state feedback fixed-time stabilizing controller and a Lyapunov function for verifying fixed-time convergence can be constructed explicitly. An example of spacecraft attitude stabilization was also presented to demonstrate the effectiveness of our method.

Appendix A

where This shows that As x_{2i}^*(0) = 0 and V_1(x_1) is positive definite and proper, the remainder of the proof can be divided into two cases.
Case 1: If x = (x_1^T, x_2^T)^T ≠ 0 with x_1 ≠ 0, we have Case 2: In the case when x = (0^T, x_2^T)^T ≠ 0, it follows from (A1) that Hence, one can conclude that V(x) is positive definite.