Article

Adaptive Selective Disturbance Elimination-Based Fixed-Time Consensus Tracking for a Class of Nonlinear Multiagent Systems

Guanghuan Xiong, Xiangmin Tan, Guanzhen Cao and Xingkui Hong
1 Institute of Engineering Thermophysics, Chinese Academy of Sciences, No. 11 North Fourth Ring West Road, Beijing 100190, China
2 National Key Laboratory of Science and Technology on Advanced Light-Duty Gas-Turbine, Beijing 100190, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(8), 1503; https://doi.org/10.3390/electronics14081503
Submission received: 4 March 2025 / Revised: 19 March 2025 / Accepted: 24 March 2025 / Published: 9 April 2025

Abstract

This paper addresses the problem of fixed-time consensus tracking for a class of nonlinear multiagent systems (MASs) with disturbances. We establish a novel fixed-time consensus tracking protocol with adaptive disturbance rejection capabilities, leveraging adaptive selective disturbance elimination (ASDE) technology. This protocol consists of a distributed fixed-time observer (DFTO), a fixed-time disturbance observer (FTDO), and an adaptive selective disturbance elimination backstepping controller (ASDE) with adaptive lumped disturbance compensation abilities. The DFTO estimates the leader’s output using the communication network topology of each follower, while the FTDO rapidly observes the lumped disturbances and their derivatives. By adding disturbance indicator terms and disturbance observation attenuation terms to the control law, the beneficial and harmful effects of disturbance are distinguished. Under favorable disturbance conditions, lumped disturbances can be used to accelerate tracking speed. If disturbances are harmful, they are adaptively compensated to improve tracking accuracy. Furthermore, the fixed-time stability of each part of the protocol is analyzed using Lyapunov theory. Simulation results show that, under different initial states and command inputs, the proposed method achieves faster convergence and smaller tracking errors compared to the adaptive conditional disturbance negation backstepping controller (ACDN), conditional disturbance negation backstepping controller (CDN), and non-smooth backstepping controller (NBCDC), verifying the effectiveness of the proposed method. The research outcomes serve as a reference for future multiagent adaptive anti-disturbance cooperative control technology.

1. Introduction

Motivated by the increasing application demands in both military and civilian domains, the distributed cooperative control of multiagent systems (MASs) has garnered considerable attention in recent years. In the realm of multiagent distributed cooperative control, the design of leader–follower consensus tracking protocols emerges as a fundamental and vital issue. Therefore, designing a consensus tracking protocol with excellent performance is crucial.
Developing consensus tracking protocols for MASs with disturbances involves two key issues. The first issue focuses on the convergence of the consensus protocols. The convergence rate is a pivotal performance specification of MASs, which reflects specifications in classical control such as rise time, settling time, overshoot, and steady-state error. Hence, it warrants primary attention. Numerous relevant studies have been conducted, broadly categorized into four types: asymptotic consensus convergence [1,2], finite-time consensus convergence [3], fixed-time consensus convergence [4,5,6,7], and predefined/prescribed-time consensus convergence [8].
The conventional linear distributed consensus tracking protocols can only achieve asymptotic consensus convergence [9,10,11,12], enabling each agent in the MAS to reach consensus tracking only as time approaches infinity; representative work includes [1,2]. However, the main challenges with asymptotic consensus convergence are as follows: (1) consensus can only be attained as time approaches infinity, limiting its applicability, and (2) if the system contains a distributed state observer, a controller, and a disturbance observer, the consensus protocol does not satisfy the “separation principle”, and its stability must be analyzed as a whole, which is difficult for MASs with complex structures. The concept of finite-time convergence dates back to the 1960s and has since been introduced into MASs. Although this method demonstrates commendable control precision, robustness, and anti-disturbance ability, its estimate of the settling time relies on the initial conditions. Additionally, the practicality of this method is significantly constrained when the states of each agent are unmeasurable.
To address the problems associated with the aforementioned finite-time consensus/finite-time control, researchers have introduced fixed-time consensus/fixed-time control methods [13,14,15,16,17,18,19,20,21,22]. One distinguishing feature is the incorporation of a fractional-order non-smooth term, enabling the estimation of an upper bound on the settling time without requiring any initial condition information. Fixed-time stability was initially discussed and defined in [14,23]. The initial investigation into fixed-time consensus for MASs is presented in [24], where a nonlinear control protocol is introduced to ensure fixed-time convergence for MASs characterized by basic integrator dynamics. Additionally, Zuo [25] developed a nonsingular fixed-time consensus tracking protocol for second-order multiagent networks. Building upon this work, Fu and Wang [26] further refined the findings of [25] by providing an upper-bound estimate of the settling time. Although these methods suffer from discrepancies between estimated and actual settling times, as well as dependency on controller parameters, their compliance with the “separation principle” is beneficial for the theoretical proof of consensus tracking in MASs with disturbances. Moreover, recent advancements in electronic and computer technology have substantially enhanced the computing speed of modern controllers, so the challenges mentioned above can be effectively alleviated as computational capability evolves. Therefore, we primarily focus on addressing the fixed-time consensus challenge.
The second critical issue pertains to disturbance rejection. Disturbances are pervasive and diverse in control systems and significantly affect system performance. For instance, when motors operate in weak-magnetic-field regions, they face parameter uncertainties and load disturbances, which introduce strong nonlinearities into the motor dynamics and degrade control accuracy [27]. Consequently, disturbance rejection is a critical concern in high-performance control. In fact, active anti-disturbance control algorithms have been widely adopted in fields such as PMSMs, robotics, and unmanned aerial vehicles. As integrations of individual systems, MASs also face the significant challenge of disturbance rejection. In recent research, Liu et al. [28] investigated fixed-time consensus tracking for second-order MASs in the presence of disturbances. To address this, they combined a discontinuous sliding-mode approach with a continuous super-twisting control strategy within the integral sliding mode (ISM) framework, ensuring disturbance compensation within an expected convergence time. However, this approach fails to accurately capture the disturbance variations, leading to imprecise compensation. Fortunately, a disturbance observer can effectively capture the dynamic changes of lumped disturbances, providing a precise basis for compensation, and disturbance observer-based anti-disturbance control methods [29,30] exhibit superior accuracy compared to high-gain control. Disturbance observers come in various forms, the most common being the extended state observer (ESO) [31,32,33], the disturbance observer (DOB) [34,35,36], and the generalized proportional integral observer (GPIO) [37], among others. In anti-disturbance control, various uncertainties, such as unmodeled dynamics, parametric perturbations, external interferences, and nonlinear couplings, are treated as lumped disturbances. By observing the lumped disturbances and applying feedforward compensation, anti-disturbance control exhibits superior performance compared to robust control and adaptive control. Tan and Hu [38] first proposed a fixed-time anti-disturbance nonlinear consensus tracking protocol for MASs under disturbances. By designing a distributed fixed-time state observer, a disturbance observer, and a non-smooth anti-disturbance backstepping controller, they demonstrated that disturbance feedforward compensation can improve the performance of the closed-loop system. However, the aforementioned methods all treat disturbances as detrimental factors to be completely compensated or suppressed, overlooking scenarios in which disturbances can benefit the system dynamics, for example by accelerating convergence. Fortunately, some researchers have turned their attention to this issue [39,40], introducing a conditional disturbance negation (CDN) technique that has been applied to omnidirectional mobile robots [41], autonomous underwater vehicles [42], and other fields. The effectiveness of CDN-based control lies in its ability to differentiate between beneficial and detrimental disturbances, resulting in enhanced convergence and robustness.
Existing conditional disturbance negation control methods rely on the sign of the disturbance indicator to distinguish between disturbance compensation and utilization. In some cases, high-frequency switching of this sign causes ripples that degrade performance. While Sun et al. [43] proposed an adaptive conditional disturbance negation method, it reduces static performance and requires careful parameter tuning. This remains a challenge, especially for nonlinear MASs with disturbances. To tackle this, we propose a fixed-time consensus tracking protocol based on adaptive selective disturbance elimination. The method enhances tracking speed and accuracy by introducing disturbance indicators and observation attenuation terms while simplifying parameter tuning. Compared to existing works, the main innovations and contributions of this paper include three aspects:
  • Unlike [43], this study introduces adaptive selective disturbance elimination in each agent’s tracking control design, incorporating disturbance indicator and disturbance observation attenuation terms in the control law. This distinguishes between beneficial and detrimental disturbances, leveraging beneficial disturbances to accelerate tracking and suppressing detrimental ones to improve tracking accuracy.
  • In this study, a fixed-time disturbance observer is introduced to rapidly and accurately estimate lumped disturbances and their derivatives arising from unmodeled dynamics, parameter perturbations, external disturbances, and nonlinear couplings. Utilizing Lyapunov theory, the fixed-time stability of the disturbance observer is proven, representing a novel contribution to the field of multiagent consensus tracking.
  • Additionally, an application framework for a fixed-time tracking protocol based on adaptive selective disturbance elimination is proposed. The protocol includes a distributed fixed-time observer for estimating the leader’s output under specific network topologies, a fixed-time disturbance observer for rapid estimation of lumped disturbances, and a fixed-time active anti-disturbance controller based on adaptive selective disturbance elimination, integrating the strengths of backstepping, nonlinear dynamic inversion, and conditional disturbance negation. The protocol leverages the “separation principle” and analyzes its fixed-time stability based on Lyapunov theory, demonstrating significant innovation.
This paper is organized as follows: Section 2 gives a problem formulation. Section 3 presents the technical details and theoretical proofs of the proposed fixed-time consensus tracking protocol. Simulations and analyses are given in Section 4, while some conclusions are outlined in Section 5.

2. Problem Formulation

Consider a multiagent system encompassing one leader and N followers, denoted as $0$ and $i = 1, 2, \ldots, N$, respectively. The leader’s operational dynamics are characterized by the equations
$$
\begin{aligned}
\dot{x}_{01} &= f_{01}(\bar{x}_{01}) + g_{01}(\bar{x}_{01})\,x_{02},\\
\dot{x}_{02} &= f_{02}(\bar{x}_{02}) + g_{02}(\bar{x}_{02})\,x_{03},\\
&\;\;\vdots\\
\dot{x}_{0n} &= f_{0n}(\bar{x}_{0n}) + g_{0n}(\bar{x}_{0n})\,u_{0},\\
y_{0} &= x_{01},
\end{aligned}
$$
where $x_0 = [x_{01}, x_{02}, \ldots, x_{0n}]^{T} \in \mathbb{R}^{n}$ is the vector representing the leader’s state, while $\bar{x}_{0j} = [x_{01}, \ldots, x_{0j}]^{T}$. The control input applied to the leader, denoted $u_0$, is a scalar, and the output produced by the leader is symbolized as $y_0$. The functions $f_{0j}(\cdot)$ and $g_{0j}(\cdot)$ for $j = 1, 2, \ldots, n$ are assumed to be continuously differentiable with locally Lipschitz derivatives.
In parallel, the dynamics of each follower are defined by the equations
$$
\begin{aligned}
\dot{x}_{i1} &= f_{i1}(\bar{x}_{i1}) + g_{i1}(\bar{x}_{i1})\,x_{i2} + d_{i1},\\
\dot{x}_{i2} &= f_{i2}(\bar{x}_{i2}) + g_{i2}(\bar{x}_{i2})\,x_{i3} + d_{i2},\\
&\;\;\vdots\\
\dot{x}_{in} &= f_{in}(\bar{x}_{in}) + g_{in}(\bar{x}_{in})\,u_{i} + d_{in},\\
y_{i} &= x_{i1},
\end{aligned}
$$
where $x_i = [x_{i1}, \ldots, x_{in}]^{T} \in \mathbb{R}^{n}$ represents the state of the $i$th follower, and $\bar{x}_{ij} = [x_{i1}, \ldots, x_{ij}]^{T}$. The scalar $u_i$ denotes the control input for the $i$th follower. The terms $d_{ij}$ ($j = 1, \ldots, n$) are disturbances, consisting of both model uncertainties and external factors. The functions $f_{ij}(\cdot)$ and $g_{ij}(\cdot)$ are pre-determined by the system structure, with $g_{ij}(\cdot)$ assumed to be non-singular.
The primary goal is to establish a fixed-time consensus tracking protocol, denoted by $u_i$, utilizing only localized information for each follower to achieve fixed-time coordination with the leader. In the context of multi-drone cooperative control and task execution, this protocol can achieve information sharing among drones and ensure that the entire system reaches a consistent state within a fixed time, thereby improving the reliability and efficiency of task execution. The fixed-time property ensures that for any initial conditions $y_i(0)$ with $i \in \{1, 2, \ldots, N\}$, there exists a finite time $T_{\max}$ such that
$$\lim_{t \to T_{\max}} \big|y_i(t) - y_0(t)\big| = 0, \qquad y_i(t) = y_0(t), \quad t > T_{\max}.$$
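To make the strict-feedback structure of (1) and (2) concrete, the following minimal Python sketch (not the authors' code) integrates one follower of relative degree three; the functions $f_{ij}$, $g_{ij}$ are placeholder choices and the disturbances loosely echo the Section 4 example.

```python
import numpy as np

# Minimal sketch: Euler integration of one follower's strict-feedback dynamics (2)
# with placeholder f_ij, g_ij and sinusoidal disturbances d_ij.
def follower_step(x, u, t, dt=1e-3):
    """x = [x_i1, x_i2, x_i3]; returns the state after one Euler step."""
    f = [0.0, -np.sin(x[0]) - x[1], -x[2]]                        # hypothetical f_i1..f_i3
    g = [1.0, 1.0, 1.0]                                           # hypothetical g_i1..g_i3 (non-singular)
    d = [10*np.sin(np.pi*t), 2*np.sin(np.pi*t), np.sin(np.pi*t)]  # disturbances as used in Section 4
    dx = np.array([
        f[0] + g[0]*x[1] + d[0],
        f[1] + g[1]*x[2] + d[1],
        f[2] + g[2]*u    + d[2],
    ])
    return x + dt*dx

x = np.array([3.0, 0.0, 0.0])          # one of the initial states considered later
for k in range(1000):                   # 1 s of open-loop simulation with u = 0
    x = follower_step(x, u=0.0, t=k*1e-3)
print("output y_i = x_i1 after 1 s:", x[0])
```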

3. Adaptive Selective Disturbance Elimination Consensus Tracking

3.1. Distributed Fixed-Time Observer

Considering that the leader’s state information is only partially accessible to the follower agents, we develop a distributed estimation framework for strict-feedback nonlinear affine systems. This architecture enables each follower to reconstruct the leader’s output, even when direct measurement of the leader is limited to a subset of agents. Let the leader’s state components be denoted as $x_{0,k}$ ($k = 1, 2, \ldots, n$) and the estimate generated by the $i$th follower as $\hat{x}_{0,k}^{i}$. Throughout, $\lceil z\rfloor^{p} := |z|^{p}\operatorname{sign}(z)$ denotes the signed fractional power. The proposed distributed fixed-time observer (DFTO) is constructed as follows:
$$
\begin{aligned}
\dot{\hat{x}}_{0k}^{i} ={}& \alpha_k \Big\lceil \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0k}^{i}-\hat{x}_{0k}^{j}\big) + b_i\big(\hat{x}_{0k}^{i}-x_{0k}\big) \Big\rfloor^{1-\frac{1}{\mu}} + \beta_k \Big\lceil \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0k}^{i}-\hat{x}_{0k}^{j}\big) + b_i\big(\hat{x}_{0k}^{i}-x_{0k}\big) \Big\rfloor^{1+\frac{1}{\mu}}\\
& + f_{0k}\big(\hat{\bar{x}}_{0k}^{i}\big) + g_{0k}\big(\hat{\bar{x}}_{0k}^{i}\big)\hat{x}_{0,k+1}^{i} + \gamma_k \operatorname{sign}\Big( \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0k}^{i}-\hat{x}_{0k}^{j}\big) + b_i\big(\hat{x}_{0k}^{i}-x_{0k}\big) \Big)\\
& \cdot \sum_{m=1}^{k} \sum_{i=1}^{N} \sum_{j=1}^{N} \Big[ a_{ij}\big(\hat{x}_{0m}^{i}-\hat{x}_{0m}^{j}\big) + b_i\big(\hat{x}_{0m}^{i}-x_{0m}\big) \Big], \qquad k = 1, 2, \ldots, n-1,\\[4pt]
\dot{\hat{x}}_{0n}^{i} ={}& \alpha_n \Big\lceil \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0n}^{i}-\hat{x}_{0n}^{j}\big) + b_i\big(\hat{x}_{0n}^{i}-x_{0n}\big) \Big\rfloor^{1-\frac{1}{\mu}} + \beta_n \Big\lceil \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0n}^{i}-\hat{x}_{0n}^{j}\big) + b_i\big(\hat{x}_{0n}^{i}-x_{0n}\big) \Big\rfloor^{1+\frac{1}{\mu}}\\
& + f_{0n}\big(\hat{\bar{x}}_{0n}^{i}\big) + c_n \operatorname{sign}\Big( \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0n}^{i}-\hat{x}_{0n}^{j}\big) + b_i\big(\hat{x}_{0n}^{i}-x_{0n}\big) \Big) + \gamma_n \operatorname{sign}\Big( \textstyle\sum_{j=1}^{N} a_{ij}\big(\hat{x}_{0n}^{i}-\hat{x}_{0n}^{j}\big) + b_i\big(\hat{x}_{0n}^{i}-x_{0n}\big) \Big)\\
& \cdot \sum_{m=1}^{n} \sum_{i=1}^{N} \sum_{j=1}^{N} \Big[ a_{ij}\big(\hat{x}_{0m}^{i}-\hat{x}_{0m}^{j}\big) + b_i\big(\hat{x}_{0m}^{i}-x_{0m}\big) \Big].
\end{aligned}
$$
Theorem 1.
If the observer parameters satisfy
$$
\begin{aligned}
&\alpha_k = \frac{\varepsilon}{\big(2\lambda_{\min}(L+B)\big)^{1-\frac{1}{2\mu}}}, \quad k = 1, 2, \ldots, n,\\
&\beta_k = \frac{\varepsilon\, N^{\frac{1}{2\mu}}}{\big(2\lambda_{\min}(L+B)\big)^{1+\frac{1}{2\mu}}}, \quad k = 1, 2, \ldots, n,\\
&-c_n + g_{0n}^{\max}(\bar{x}_{0n})\,u_0^{\max} \le 0,\\
&-\gamma_k\lambda_{\min}(L+B) + \max_{i\in\nu}\big\|\nabla f_{0k}^{T}(\xi_{ik})\big\| + \max_{i\in\nu}\big\|\nabla g_{0k}^{T}(\eta_{ik})\big\|\, x_{0,k+1}^{\max} \le 0, \quad k = 1, 2, \ldots, n-1,\\
&-\gamma_n\lambda_{\min}(L+B) + \max_{i\in\nu}\big\|\nabla f_{0n}^{T}(\xi_{in})\big\| \le 0,
\end{aligned}
$$
where $\varepsilon < 0$ and $\mu \in (0, 1]$, then the observation errors of the distributed observer (4) converge to zero within a fixed time bounded by
$$T_o := \frac{n\mu\pi}{\varepsilon}.$$
For the corresponding definitions of the symbols and stability proofs, please refer to the paper [38].
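The core of the DFTO is the local consensus error each follower forms from its neighbours’ estimates and, for pinned agents, from the leader’s true state, to which fractional-power corrections are applied. The sketch below is an assumption-laden illustration rather than the exact observer (4): the sign-compensation term with $\gamma_k$ is omitted and the usual stabilizing sign convention for fixed-time correction terms is adopted; the adjacency matrix and pinning vector correspond to the topology of Section 4.

```python
import numpy as np

# Sketch of the DFTO correction structure for one state channel k:
# each follower i forms the local consensus error
#   s_i = sum_j a_ij*(xhat_i - xhat_j) + b_i*(xhat_i - x0k)
# and applies fixed-time fractional-power feedback
#   -alpha*|s|^(1-1/mu)*sign(s) - beta*|s|^(1+1/mu)*sign(s).
def dfto_correction(xhat, x0k, A, b, alpha, beta, mu):
    s = (A * (xhat[:, None] - xhat[None, :])).sum(axis=1) + b * (xhat - x0k)
    frac = lambda z, p: np.sign(z) * np.abs(z) ** p
    return -alpha * frac(s, 1 - 1/mu) - beta * frac(s, 1 + 1/mu)

# Toy usage with the 5-follower topology of Section 4 (follower 1 pinned to the leader)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
b = np.array([1, 0, 0, 0, 0], dtype=float)
xhat = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # current estimates of x_0k by each follower
print(dfto_correction(xhat, x0k=0.0, A=A, b=b, alpha=2.7, beta=7.8, mu=2.0))
```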
Remark 1.
It is important to highlight that DFTO in this paper assumes a complete communication topology, where the information transmission between all followers and the leader is intact. However, in practical multiagent consensus tracking control scenarios, external noise or transmission delays may lead to incomplete information transmission in the communication network, affecting the overall control performance. In the future, research on enhancing the robustness of DFTO by addressing topology attacks will be meaningful.

3.2. Fixed-Time Disturbance Observer

Drawing upon the methodologies presented in references [38,44], this work introduces a fixed-time disturbance observer (FTDO) designed to estimate the lumped disturbances by the sliding mode technique, and its formulation is expressed as
$$
\begin{aligned}
\dot{z}_{ij1} &= z_{ij2} - k_{ij1}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\rho_1} - k_{ij2}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\varrho_1} + f_{ij}(\bar{x}_{ij}) + g_{ij}(\bar{x}_{ij})\,x_{i(j+1)},\\
\dot{z}_{ij2} &= z_{ij3} - k_{ij3}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\rho_2} - k_{ij4}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\varrho_2},\\
&\;\;\vdots\\
\dot{z}_{ij(l-1)} &= z_{ijl} - k_{ij(2l-3)}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\rho_{l-1}} - k_{ij(2l-2)}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\varrho_{l-1}},\\
\dot{z}_{ijl} &= -k_{ij(2l-1)}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\rho_l} - k_{ij(2l)}\big\lceil z_{ij1}-x_{ij} \big\rfloor^{\varrho_l}, \qquad j = 1, \ldots, n-1,
\end{aligned}
$$
$$
\begin{aligned}
\dot{z}_{in1} &= z_{in2} - k_{in1}\big\lceil z_{in1}-x_{in} \big\rfloor^{\rho_1} - k_{in2}\big\lceil z_{in1}-x_{in} \big\rfloor^{\varrho_1} + f_{in}(\bar{x}_{in}) + g_{in}(\bar{x}_{in})\,u_i,\\
\dot{z}_{in2} &= z_{in3} - k_{in3}\big\lceil z_{in1}-x_{in} \big\rfloor^{\rho_2} - k_{in4}\big\lceil z_{in1}-x_{in} \big\rfloor^{\varrho_2},\\
&\;\;\vdots\\
\dot{z}_{in(l-1)} &= z_{inl} - k_{in(2l-3)}\big\lceil z_{in1}-x_{in} \big\rfloor^{\rho_{l-1}} - k_{in(2l-2)}\big\lceil z_{in1}-x_{in} \big\rfloor^{\varrho_{l-1}},\\
\dot{z}_{inl} &= -k_{in(2l-1)}\big\lceil z_{in1}-x_{in} \big\rfloor^{\rho_l} - k_{in(2l)}\big\lceil z_{in1}-x_{in} \big\rfloor^{\varrho_l},
\end{aligned}
$$
where $\rho_m \in (0, 1)$ and $\varrho_m > 1$ for $m = 1, 2, \ldots, l$ satisfy $\rho_m = m\rho - (m-1)$ and $\varrho_m = m\varrho - (m-1)$, with $\rho \in (1-\iota_1, 1)$ and $\varrho \in (1, 1+\iota_2)$ for sufficiently small $\iota_1 > 0$ and $\iota_2 > 0$. The observer gains $k_{ijg}$, $g = 1, 2, \ldots, 2l$, are selected such that the matrices $A_1$ and $A_2$ below are Hurwitz:
$$
A_1 = \begin{bmatrix} -k_{ij1} & 1 & 0 & \cdots & 0\\ -k_{ij3} & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ -k_{ij(2l-3)} & 0 & 0 & \cdots & 1\\ -k_{ij(2l-1)} & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad
A_2 = \begin{bmatrix} -k_{ij2} & 1 & 0 & \cdots & 0\\ -k_{ij4} & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ -k_{ij(2l-2)} & 0 & 0 & \cdots & 1\\ -k_{ij(2l)} & 0 & 0 & \cdots & 0 \end{bmatrix}.
$$
The observer variables $z_{ij1}, z_{ij2}, \ldots, z_{ijl}$ ($j = 1, 2, \ldots, n$) correspond to the estimates of $x_{ij}, d_{ij}, \ldots, d_{ij}^{(l-2)}$, respectively.
Remark 2.
The rate at which the observer achieves convergence plays a pivotal role in determining the overall efficacy of the control system, especially in emergency situations, where it is desirable to preset the expected convergence time of the observer. The FTDO designed in this paper is notable for its ability to estimate the lumped disturbances within a finite and fixed time. In practical applications, the selection of the observer order $l$ is critical and needs to be determined based on the characteristics of the disturbed system. Note that in the FTDO, the $(l-k+1)$th-order derivatives of $z_{i\cdot k}$ ($k = 1, \ldots, l$) exist. There is thus an inherent trade-off between the smoothness and accuracy of the disturbance estimates on the one hand and the structural complexity of the FTDO on the other.
Theorem 2.
Considering the systems described by (2), the estimation errors of the lumped disturbances and their derivatives in the FTDO will reach a vicinity of the origin within a fixed time; the bound on the convergence time can be expressed as
$$T_{ij\_ob} \le \frac{\lambda_{\max}^{\,r}(P_1)}{r_1\, r} + \frac{1}{r_2\,\sigma\,\Upsilon^{\sigma}},$$
where $r = 1-\rho$, $\sigma = \varrho-1$, $r_1 = \frac{\lambda_{\min}(Q_1)}{\lambda_{\max}(P_1)}$, $r_2 = \frac{\lambda_{\min}(Q_2)}{\lambda_{\max}(P_2)}$, the positive real number $\Upsilon \le \lambda_{\min}(P_2)$, and the positive definite matrices $Q_1, Q_2 \in \mathbb{R}^{n\times n}$ and $P_1, P_2$ satisfy $Q_1 + P_1A_1 + A_1^{T}P_1 = 0$ and $Q_2 + P_2A_2 + A_2^{T}P_2 = 0$.
Proof. 
Define the estimation errors as $\chi_{ij1} = x_{ij} - z_{ij1}$, $\chi_{ij2} = d_{ij} - z_{ij2}$, $\ldots$, $\chi_{ijl} = d_{ij}^{(l-2)} - z_{ijl}$. By combining (7) and (8) with (2), we can obtain
$$
\begin{aligned}
\dot{\chi}_1 &= \chi_2 - k_{ij1}\lceil\chi_1\rfloor^{\rho_1} - k_{ij2}\lceil\chi_1\rfloor^{\varrho_1},\\
\dot{\chi}_2 &= \chi_3 - k_{ij3}\lceil\chi_1\rfloor^{\rho_2} - k_{ij4}\lceil\chi_1\rfloor^{\varrho_2},\\
&\;\;\vdots\\
\dot{\chi}_{l-1} &= \chi_l - k_{ij(2l-3)}\lceil\chi_1\rfloor^{\rho_{l-1}} - k_{ij(2l-2)}\lceil\chi_1\rfloor^{\varrho_{l-1}},\\
\dot{\chi}_l &= -k_{ij(2l-1)}\lceil\chi_1\rfloor^{\rho_l} - k_{ij(2l)}\lceil\chi_1\rfloor^{\varrho_l} + d_{ij}^{(l-1)}.
\end{aligned}
$$
The proof process aligns with Theorem 1 from [44], though the detailed proof is omitted here for brevity. This implies that | d ˜ i j | = | d ^ i j d i j | δ i j . □
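As a rough numerical illustration of the FTDO structure (7), the sketch below runs a scalar, second-order ($l = 2$) observer with hand-picked gains against a sinusoidal lumped disturbance. It is not the paper's implementation (which uses $l = 4$); it only demonstrates that the $z_{\cdot 2}$ state recovers $d$.

```python
import numpy as np

# Rough sketch of the FTDO (7) for one scalar channel with l = 2:
# z1 tracks the state x, z2 tracks the lumped disturbance d.
def frac_pow(e, p):
    return np.sign(e) * np.abs(e) ** p

dt, rho, vrho = 1e-4, 0.8, 1.2                  # exponents as in Remark 8
k1, k2, k3, k4 = 10.0, 10.0, 50.0, 50.0         # hand-picked gains for this toy example
x, z1, z2 = 0.0, 0.5, 0.0                       # deliberately wrong initial estimate z1
for step in range(int(2.5 / dt)):
    t = step * dt
    d = 2.0 * np.sin(np.pi * t)                 # assumed "true" lumped disturbance
    known = 0.0                                 # known part f_ij + g_ij*x_i(j+1), set to 0 here
    e1 = z1 - x
    dz1 = z2 - k1*frac_pow(e1, rho) - k2*frac_pow(e1, vrho) + known          # rho_1, varrho_1
    dz2 =    - k3*frac_pow(e1, 2*rho - 1) - k4*frac_pow(e1, 2*vrho - 1)      # rho_2, varrho_2
    x  += dt * (known + d)                      # plant: x_dot = known + d
    z1 += dt * dz1
    z2 += dt * dz2
print("true d(2.5 s) =", 2.0*np.sin(np.pi*2.5), "  FTDO estimate z2 =", round(z2, 3))
```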

3.3. Adaptive Selective Disturbance Elimination Backstepping Controller

3.3.1. Adaptive Selective Disturbance Elimination Backstepping Controller Design

Define the state tracking errors $e_{i1} = x_{i1} - x_{i1}^{d}$ and $e_{ij} = x_{ij} - x_{ij}^{c}$ ($j = 2, \ldots, n$), and the filtering errors $\xi_{ij} = x_{ij}^{c} - x_{ij}^{d}$ ($j = 2, \ldots, n$), where $i$ represents the index of the follower and $j$ denotes the index of the state variable. The control design process is as follows.
Step 1: We consider $x_{i2}$ as a virtual control input that makes $x_{i1}$ track $x_{i1}^{d}$. The dynamics of $e_{i1}$ can be expressed as
$$\dot{e}_{i1} = \dot{x}_{i1} - \dot{x}_{i1}^{d},$$
where $\dot{x}_{i1}^{d}$ is the derivative of $x_{i1}^{d}$.
Then, the stabilization function $x_{i2}^{d}$ is
$$x_{i2}^{d} = g_{i1}^{-1}(\bar{x}_{i1})\Big[-K_{i1}^{1}e_{i1} - K_{i1}^{2}\lceil e_{i1}\rfloor^{\mu_1} - K_{i1}^{3}\lceil e_{i1}\rfloor^{\mu_2} - f_{i1}(\bar{x}_{i1}) + \dot{x}_{i1}^{d} - \big(1-e^{-\lambda|e_{i1}|}\big)\operatorname{sign}\big(\hat{d}_{i1}e_{i1}\big)\hat{d}_{i1} - e^{-\lambda|e_{i1}|}\hat{d}_{i1}\Big],$$
where $\lambda \ge 0$, $0 < \mu_1 < 1$, $\mu_2 > 1$, $e$ is Euler's number, and $K_{i1}^{1}, K_{i1}^{2}, K_{i1}^{3} > 0$.
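A hedged sketch of how the Step-1 stabilization function (13) can be evaluated is given below, using the Scenario-1 gains of Remark 8 and placeholder $f_{i1} = 0$, $g_{i1} = 1$; it highlights the disturbance indicator $\operatorname{sign}(\hat{d}_{i1}e_{i1})$ and the attenuation factor $e^{-\lambda|e_{i1}|}$.

```python
import numpy as np

# Hedged sketch of the Step-1 stabilization function (13).
# Gains are the Scenario-1 values of Remark 8; f_i1, g_i1 are placeholders.
def x_i2_d(e_i1, d_hat_i1, dx_i1d, K=(1.1, 0.8, 0.1), mu1=0.5, mu2=1.1, lam=5.0,
           f_i1=0.0, g_i1=1.0):
    K1, K2, K3 = K
    frac = lambda z, p: np.sign(z) * np.abs(z) ** p
    # Adaptive selective disturbance elimination term:
    asde = (-(1.0 - np.exp(-lam*abs(e_i1))) * np.sign(d_hat_i1*e_i1) * d_hat_i1
            - np.exp(-lam*abs(e_i1)) * d_hat_i1)
    return (1.0/g_i1) * (-K1*e_i1 - K2*frac(e_i1, mu1) - K3*frac(e_i1, mu2)
                         - f_i1 + dx_i1d + asde)

# Beneficial disturbance (sign(d_hat*e) < 0): only partially cancelled.
print(x_i2_d(e_i1=1.0, d_hat_i1=-2.0, dx_i1d=0.0))
# Detrimental disturbance (sign(d_hat*e) > 0): essentially fully compensated.
print(x_i2_d(e_i1=1.0, d_hat_i1=+2.0, dx_i1d=0.0))
```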
The first command filter is designed as
$$\tau_{i2}\,\dot{x}_{i2}^{c} = -\kappa_{i2}^{1}\big\lceil x_{i2}^{c} - x_{i2}^{d}\big\rfloor^{\mu_1} - \kappa_{i2}^{2}\big(x_{i2}^{c} - x_{i2}^{d}\big) - \kappa_{i2}^{3}\big\lceil x_{i2}^{c} - x_{i2}^{d}\big\rfloor^{\mu_2}.$$
Obviously, the dynamics of $\xi_{i2}$ are
$$\tau_{i2}\,\dot{\xi}_{i2} = -\kappa_{i2}^{1}\lceil \xi_{i2}\rfloor^{\mu_1} - \kappa_{i2}^{2}\xi_{i2} - \kappa_{i2}^{3}\lceil \xi_{i2}\rfloor^{\mu_2} - \tau_{i2}\dot{x}_{i2}^{d},$$
where $\tau_{i2}$ is a time constant and $\kappa_{i2}^{1}$, $\kappa_{i2}^{2}$, and $\kappa_{i2}^{3}$ are appropriate positive gains. It is assumed that $\dot{x}_{i2}^{d}$ is bounded, and a known positive constant $B_{i2}$ exists satisfying $B_{i2} = \sup\big|\dot{x}_{i2}^{d}\big|$.
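The command filter can be implemented as a simple one-step update. The sketch below (assuming forward-Euler discretization and the Remark 8 filter gains) returns both the filtered signal $x_{i2}^{c}$ and the derivative $\dot{x}_{i2}^{c}$ that is later used in place of the analytic derivative of $x_{i2}^{d}$.

```python
import numpy as np

# Hedged sketch of the command filter (14): x_c tracks the virtual control x_d,
# and dx_c (returned) replaces the analytic derivative of x_d.
def command_filter_step(x_c, x_d, dt=1e-3, tau=0.1, kappa=(5.0, 0.2, 0.1),
                        mu1=0.5, mu2=1.1):
    k1, k2, k3 = kappa                              # Remark 8 gains for x_i2^c
    frac = lambda z, p: np.sign(z) * np.abs(z) ** p
    xi = x_c - x_d                                  # filtering error
    dx_c = (-k1*frac(xi, mu1) - k2*xi - k3*frac(xi, mu2)) / tau
    return x_c + dt*dx_c, dx_c

# Toy usage: track a slowly varying virtual control x_d(t) = sin(t)
x_c = 0.0
for k in range(2000):
    x_c, dx_c = command_filter_step(x_c, np.sin(k*1e-3))
print("x_c after 2 s:", x_c, " reference sin(2):", np.sin(2.0))
```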
Combining $\xi_{i2} = x_{i2}^{c} - x_{i2}^{d}$ with $e_{i2} = x_{i2} - x_{i2}^{c}$ yields
$$x_{i2} = x_{i2}^{d} + e_{i2} + \xi_{i2}.$$
Considering (2) and (16), $\dot{x}_{i1}$ can be rewritten as
$$
\begin{aligned}
\dot{x}_{i1} &= f_{i1}(\bar{x}_{i1}) + g_{i1}(\bar{x}_{i1})\big(x_{i2}^{d} + e_{i2} + \xi_{i2}\big) + d_{i1}\\
&= -K_{i1}^{1}e_{i1} - K_{i1}^{2}\lceil e_{i1}\rfloor^{\mu_1} - K_{i1}^{3}\lceil e_{i1}\rfloor^{\mu_2} + g_{i1}(\bar{x}_{i1})\xi_{i2} + g_{i1}(\bar{x}_{i1})e_{i2}\\
&\quad - \big(1-e^{-\lambda|e_{i1}|}\big)\operatorname{sign}\big(\hat{d}_{i1}e_{i1}\big)\hat{d}_{i1} - e^{-\lambda|e_{i1}|}\hat{d}_{i1} + d_{i1} + \dot{x}_{i1}^{d}.
\end{aligned}
$$
Next, substituting (17) into (12) yields
$$\dot{e}_{i1} = -K_{i1}^{1}e_{i1} - K_{i1}^{2}\lceil e_{i1}\rfloor^{\mu_1} - K_{i1}^{3}\lceil e_{i1}\rfloor^{\mu_2} + g_{i1}(\bar{x}_{i1})\xi_{i2} + g_{i1}(\bar{x}_{i1})e_{i2} - \big(1-e^{-\lambda|e_{i1}|}\big)\hat{d}_{i1}\operatorname{sign}\big(\hat{d}_{i1}e_{i1}\big) + \big(1-e^{-\lambda|e_{i1}|}\big)d_{i1} - e^{-\lambda|e_{i1}|}\big(\hat{d}_{i1}-d_{i1}\big).$$
Step j: Utilize $x_{i(j+1)}$, $j = 2, \ldots, n-1$, to ensure that $x_{ij}$ tracks $x_{ij}^{c}$. The dynamics of $e_{ij}$ can be described as
$$\dot{e}_{ij} = \dot{x}_{ij} - \dot{x}_{ij}^{c}.$$
Subsequently, $x_{i(j+1)}^{d}$ can be defined as
$$x_{i(j+1)}^{d} = g_{ij}^{-1}(\bar{x}_{ij})\Big[-K_{ij}^{1}e_{ij} - K_{ij}^{2}\lceil e_{ij}\rfloor^{\mu_1} - K_{ij}^{3}\lceil e_{ij}\rfloor^{\mu_2} - f_{ij}(\bar{x}_{ij}) - g_{i(j-1)}(\bar{x}_{i(j-1)})e_{i(j-1)} + \dot{x}_{ij}^{c} - \big(1-e^{-\lambda|e_{ij}|}\big)\operatorname{sign}\big(\hat{d}_{ij}e_{ij}\big)\hat{d}_{ij} - e^{-\lambda|e_{ij}|}\hat{d}_{ij}\Big],$$
where $K_{ij}^{1}$, $K_{ij}^{2}$, and $K_{ij}^{3} > 0$.
Next, the command filter is designed as
$$\tau_{i(j+1)}\,\dot{x}_{i(j+1)}^{c} = -\kappa_{i(j+1)}^{1}\big\lceil x_{i(j+1)}^{c} - x_{i(j+1)}^{d}\big\rfloor^{\mu_1} - \kappa_{i(j+1)}^{2}\big(x_{i(j+1)}^{c} - x_{i(j+1)}^{d}\big) - \kappa_{i(j+1)}^{3}\big\lceil x_{i(j+1)}^{c} - x_{i(j+1)}^{d}\big\rfloor^{\mu_2}.$$
The dynamics of $\xi_{i(j+1)}$ are
$$\tau_{i(j+1)}\,\dot{\xi}_{i(j+1)} = -\kappa_{i(j+1)}^{1}\lceil \xi_{i(j+1)}\rfloor^{\mu_1} - \kappa_{i(j+1)}^{2}\xi_{i(j+1)} - \kappa_{i(j+1)}^{3}\lceil \xi_{i(j+1)}\rfloor^{\mu_2} - \tau_{i(j+1)}\dot{x}_{i(j+1)}^{d},$$
where $\tau_{i(j+1)}$ represents a time constant, and $\kappa_{i(j+1)}^{1}$, $\kappa_{i(j+1)}^{2}$, and $\kappa_{i(j+1)}^{3}$ are designed to be positive. We assume that $\dot{x}_{i(j+1)}^{d}$ is bounded and that there exists a known positive constant $B_{i(j+1)}$ such that $B_{i(j+1)} = \sup\big|\dot{x}_{i(j+1)}^{d}\big|$.
Combining the filtering error $\xi_{i(j+1)} = x_{i(j+1)}^{c} - x_{i(j+1)}^{d}$ with the tracking error $e_{i(j+1)} = x_{i(j+1)} - x_{i(j+1)}^{c}$ yields
$$x_{i(j+1)} = x_{i(j+1)}^{d} + e_{i(j+1)} + \xi_{i(j+1)}.$$
Similarly to (17), substituting (23) and (20) into (2) allows $\dot{x}_{ij}$ to be rewritten as
$$
\begin{aligned}
\dot{x}_{ij} &= f_{ij}(\bar{x}_{ij}) + g_{ij}(\bar{x}_{ij})\big(x_{i(j+1)}^{d} + e_{i(j+1)} + \xi_{i(j+1)}\big) + d_{ij}\\
&= -K_{ij}^{1}e_{ij} - K_{ij}^{2}\lceil e_{ij}\rfloor^{\mu_1} - K_{ij}^{3}\lceil e_{ij}\rfloor^{\mu_2} + g_{ij}(\bar{x}_{ij})\xi_{i(j+1)} + g_{ij}(\bar{x}_{ij})e_{i(j+1)} - g_{i(j-1)}(\bar{x}_{i(j-1)})e_{i(j-1)}\\
&\quad - \big(1-e^{-\lambda|e_{ij}|}\big)\operatorname{sign}\big(\hat{d}_{ij}e_{ij}\big)\hat{d}_{ij} - e^{-\lambda|e_{ij}|}\hat{d}_{ij} + d_{ij} + \dot{x}_{ij}^{c}.
\end{aligned}
$$
Then, combining (19) with (24) yields
$$
\begin{aligned}
\dot{e}_{ij} &= -K_{ij}^{1}e_{ij} - K_{ij}^{2}\lceil e_{ij}\rfloor^{\mu_1} - K_{ij}^{3}\lceil e_{ij}\rfloor^{\mu_2} + g_{ij}(\bar{x}_{ij})\xi_{i(j+1)} + g_{ij}(\bar{x}_{ij})e_{i(j+1)} - g_{i(j-1)}(\bar{x}_{i(j-1)})e_{i(j-1)}\\
&\quad - \big(1-e^{-\lambda|e_{ij}|}\big)\hat{d}_{ij}\operatorname{sign}\big(\hat{d}_{ij}e_{ij}\big) + \big(1-e^{-\lambda|e_{ij}|}\big)d_{ij} - e^{-\lambda|e_{ij}|}\big(\hat{d}_{ij}-d_{ij}\big).
\end{aligned}
$$
Step n: Unlike the previous cases, the dynamics of $e_{in}$ are
$$\dot{e}_{in} = \dot{x}_{in} - \dot{x}_{in}^{c} = f_{in}(\bar{x}_{in}) + g_{in}(\bar{x}_{in})\,u_i + d_{in} - \dot{x}_{in}^{c}.$$
Finally, $u_i$ is designed as
$$u_i = g_{in}^{-1}(\bar{x}_{in})\Big[-K_{in}^{1}e_{in} - K_{in}^{2}\lceil e_{in}\rfloor^{\mu_1} - K_{in}^{3}\lceil e_{in}\rfloor^{\mu_2} - f_{in}(\bar{x}_{in}) - g_{i(n-1)}(\bar{x}_{i(n-1)})e_{i(n-1)} + \dot{x}_{in}^{c} - \big(1-e^{-\lambda|e_{in}|}\big)\operatorname{sign}\big(\hat{d}_{in}e_{in}\big)\hat{d}_{in} - e^{-\lambda|e_{in}|}\hat{d}_{in}\Big],$$
where $K_{in}^{1}$, $K_{in}^{2}$, and $K_{in}^{3} > 0$.
Combining (26) and (27) with (2) yields
$$\dot{e}_{in} = -K_{in}^{1}e_{in} - K_{in}^{2}\lceil e_{in}\rfloor^{\mu_1} - K_{in}^{3}\lceil e_{in}\rfloor^{\mu_2} - g_{i(n-1)}(\bar{x}_{i(n-1)})e_{i(n-1)} - \big(1-e^{-\lambda|e_{in}|}\big)\hat{d}_{in}\operatorname{sign}\big(\hat{d}_{in}e_{in}\big) + \big(1-e^{-\lambda|e_{in}|}\big)d_{in} - e^{-\lambda|e_{in}|}\big(\hat{d}_{in}-d_{in}\big).$$
Remark 3.
During the design of the control law, the derivative of the desired intermediate virtual control input $x_{ij}^{d}$ ($j = 2, \ldots, n$) appears, as shown in (13) and (20). This derivative is difficult to obtain through analytical calculation, leading to the problem of “explosion of complexity”. To address this issue, we let $x_{ij}^{d}$ pass through the command filter to obtain $x_{ij}^{c}$ and its derivative $\dot{x}_{ij}^{c}$.

3.3.2. Analysis of System Stability

Theorem 3.
For the system (2), its closed-loop form can be derived using (13), (20), and (27) as follows:
$$
\begin{aligned}
\dot{e}_{i1} ={}& -K_{i1}^{1}e_{i1} - K_{i1}^{2}\lceil e_{i1}\rfloor^{\mu_1} - K_{i1}^{3}\lceil e_{i1}\rfloor^{\mu_2} + g_{i1}(\bar{x}_{i1})\xi_{i2} + g_{i1}(\bar{x}_{i1})e_{i2}\\
& - \big(1-e^{-\lambda|e_{i1}|}\big)\hat{d}_{i1}\operatorname{sign}(\hat{d}_{i1}e_{i1}) + \big(1-e^{-\lambda|e_{i1}|}\big)d_{i1} - e^{-\lambda|e_{i1}|}\big(\hat{d}_{i1}-d_{i1}\big),\\
\dot{e}_{ij} ={}& -K_{ij}^{1}e_{ij} - K_{ij}^{2}\lceil e_{ij}\rfloor^{\mu_1} - K_{ij}^{3}\lceil e_{ij}\rfloor^{\mu_2} + g_{ij}(\bar{x}_{ij})\xi_{i(j+1)} + g_{ij}(\bar{x}_{ij})e_{i(j+1)} - g_{i(j-1)}(\bar{x}_{i(j-1)})e_{i(j-1)}\\
& - \big(1-e^{-\lambda|e_{ij}|}\big)\hat{d}_{ij}\operatorname{sign}(\hat{d}_{ij}e_{ij}) + \big(1-e^{-\lambda|e_{ij}|}\big)d_{ij} - e^{-\lambda|e_{ij}|}\big(\hat{d}_{ij}-d_{ij}\big), \quad j = 2, \ldots, n-1,\\
\dot{e}_{in} ={}& -K_{in}^{1}e_{in} - K_{in}^{2}\lceil e_{in}\rfloor^{\mu_1} - K_{in}^{3}\lceil e_{in}\rfloor^{\mu_2} - g_{i(n-1)}(\bar{x}_{i(n-1)})e_{i(n-1)}\\
& - \big(1-e^{-\lambda|e_{in}|}\big)\hat{d}_{in}\operatorname{sign}(\hat{d}_{in}e_{in}) + \big(1-e^{-\lambda|e_{in}|}\big)d_{in} - e^{-\lambda|e_{in}|}\big(\hat{d}_{in}-d_{in}\big),\\
\tau_{ij}\dot{x}_{ij}^{c} ={}& -\kappa_{ij}^{1}\lceil\xi_{ij}\rfloor^{\mu_1} - \kappa_{ij}^{2}\xi_{ij} - \kappa_{ij}^{3}\lceil\xi_{ij}\rfloor^{\mu_2}, \quad j = 2, \ldots, n.
\end{aligned}
$$
If the control gains satisfy
$$
\begin{aligned}
\eta_i &= 2^{\frac{1+\mu_1}{2}}\cdot\min\Big\{\min_{j\in[1,n]}\{K_{ij}^{2}\},\ \min_{j\in[2,n]}\{\kappa_{ij}^{1}/\tau_{ij}\}\Big\} > 0,\\
\zeta_i &= 2^{\frac{1+\mu_2}{2}}(2n-1)^{\frac{1-\mu_2}{2}}\cdot\min\Big\{\min_{j\in[1,n]}\{K_{ij}^{3}\},\ \min_{j\in[2,n]}\{\kappa_{ij}^{3}/\tau_{ij}\}\Big\} > 0,\\
\sigma_i &= 2\cdot\min\Big\{\min_{j\in[1,n-1]}\Big\{K_{ij}^{1}-\tfrac{1}{2}-\tfrac{\bar{g}_{ij}(\bar{x}_{ij})}{2}\Big\},\ K_{in}^{1}-\tfrac{1}{2},\ \min_{j\in[2,n]}\Big\{\tfrac{\kappa_{ij}^{2}}{\tau_{ij}}-\tfrac{1}{2}-\tfrac{\bar{g}_{i(j-1)}(\bar{x}_{i(j-1)})}{2}\Big\}\Big\} \ge 0,
\end{aligned}
$$
then $e_{ij}$ and $\xi_{ij}$ will converge to a small neighborhood of the origin. Notably, the convergence time $T_i^{c}$ is independent of the initial values of the errors, and
$$T_i^{c} = T_{i\_ob} + \frac{2}{\zeta_i(\mu_2-1)} + \frac{1}{\eta_i\big(2^{\frac{1+\mu_1}{2}}-1\big)}\cdot\frac{2}{1-\mu_1},$$
where $T_{i\_ob}$ is defined as the maximum value across all individual convergence times $T_{ij\_ob}$.
Proof. 
Choose the Lyapunov function
$$W_i = \sum_{k=1}^{n}\frac{e_{ik}^{2}}{2} + \sum_{k=2}^{n}\frac{\xi_{ik}^{2}}{2}.$$
The derivative of (32) with respect to time is
$$
\begin{aligned}
\dot{W}_i ={}& \sum_{j=1}^{n} e_{ij}\dot{e}_{ij} + \sum_{j=2}^{n}\xi_{ij}\dot{\xi}_{ij}\\
={}& \sum_{j=1}^{n} e_{ij}\Big[-K_{ij}^{1}e_{ij} - K_{ij}^{2}\lceil e_{ij}\rfloor^{\mu_1} - K_{ij}^{3}\lceil e_{ij}\rfloor^{\mu_2} - \big(1-e^{-\lambda|e_{ij}|}\big)\hat{d}_{ij}\operatorname{sign}(\hat{d}_{ij}e_{ij}) + \big(1-e^{-\lambda|e_{ij}|}\big)d_{ij} - e^{-\lambda|e_{ij}|}\big(\hat{d}_{ij}-d_{ij}\big)\Big]\\
& + \sum_{j=1}^{n-1}\xi_{i(j+1)}g_{ij}e_{ij} + \sum_{j=2}^{n}\xi_{ij}\Big[-\frac{\kappa_{ij}^{1}}{\tau_{ij}}\lceil\xi_{ij}\rfloor^{\mu_1} - \frac{\kappa_{ij}^{2}}{\tau_{ij}}\xi_{ij} - \frac{\kappa_{ij}^{3}}{\tau_{ij}}\lceil\xi_{ij}\rfloor^{\mu_2} - \dot{x}_{ij}^{d}\Big]\\
={}& \sum_{j=1}^{n}\Big[-K_{ij}^{1}e_{ij}^{2} - K_{ij}^{2}|e_{ij}|^{1+\mu_1} - K_{ij}^{3}|e_{ij}|^{1+\mu_2}\Big] - \sum_{j=1}^{n}\Big[\big(1-e^{-\lambda|e_{ij}|}\big)\big(|e_{ij}\hat{d}_{ij}| - e_{ij}d_{ij}\big) + e^{-\lambda|e_{ij}|}e_{ij}\tilde{d}_{ij}\Big]\\
& + \sum_{j=1}^{n-1}\xi_{i(j+1)}g_{ij}e_{ij} + \sum_{j=2}^{n}\Big[-\frac{\kappa_{ij}^{1}}{\tau_{ij}}|\xi_{ij}|^{1+\mu_1} - \frac{\kappa_{ij}^{2}}{\tau_{ij}}\xi_{ij}^{2} - \frac{\kappa_{ij}^{3}}{\tau_{ij}}|\xi_{ij}|^{1+\mu_2} - \dot{x}_{ij}^{d}\xi_{ij}\Big],
\end{aligned}
$$
where $\tilde{d}_{ij} = \hat{d}_{ij} - d_{ij}$.
Since the following equality holds:
$$
\begin{aligned}
&\big(1-e^{-\lambda|e_{ij}|}\big)\big(|e_{ij}\hat{d}_{ij}| - e_{ij}d_{ij}\big) + e^{-\lambda|e_{ij}|}e_{ij}\tilde{d}_{ij}\\
&\quad = \big(1-e^{-\lambda|e_{ij}|}\big)\big(|e_{ij}\hat{d}_{ij}| - e_{ij}(\hat{d}_{ij}-\tilde{d}_{ij})\big) + e^{-\lambda|e_{ij}|}e_{ij}\tilde{d}_{ij}\\
&\quad = \big(1-e^{-\lambda|e_{ij}|}\big)\big(|e_{ij}\hat{d}_{ij}| - e_{ij}\hat{d}_{ij}\big) + \big(1-e^{-\lambda|e_{ij}|}\big)e_{ij}\tilde{d}_{ij} + e^{-\lambda|e_{ij}|}e_{ij}\tilde{d}_{ij}.
\end{aligned}
$$
In (34), since the adaptive factor λ is greater than 0, it follows that $0 \le 1-e^{-\lambda|e_{ij}|} < 1$. Meanwhile, $|e_{ij}\hat{d}_{ij}| - e_{ij}\hat{d}_{ij} \ge 0$ always holds, and it can be concluded that
$$\big(1-e^{-\lambda|e_{ij}|}\big)\big(|e_{ij}\hat{d}_{ij}| - e_{ij}d_{ij}\big) + e^{-\lambda|e_{ij}|}e_{ij}\tilde{d}_{ij} \ \ge\ \big(1-e^{-\lambda|e_{ij}|}\big)e_{ij}\tilde{d}_{ij} + e^{-\lambda|e_{ij}|}e_{ij}\tilde{d}_{ij} = e_{ij}\tilde{d}_{ij}.$$
Substituting (35) into (33), the latter can be bounded as follows:
$$
\begin{aligned}
\dot{W}_i \le{}& \sum_{j=1}^{n}\Big[-K_{ij}^{1}e_{ij}^{2} - K_{ij}^{2}|e_{ij}|^{1+\mu_1} - K_{ij}^{3}|e_{ij}|^{1+\mu_2} - e_{ij}\tilde{d}_{ij}\Big] + \sum_{j=1}^{n-1}\xi_{i(j+1)}g_{ij}e_{ij}\\
& + \sum_{j=2}^{n}\Big[-\frac{\kappa_{ij}^{1}}{\tau_{ij}}|\xi_{ij}|^{1+\mu_1} - \frac{\kappa_{ij}^{2}}{\tau_{ij}}\xi_{ij}^{2} - \frac{\kappa_{ij}^{3}}{\tau_{ij}}|\xi_{ij}|^{1+\mu_2} - \dot{x}_{ij}^{d}\xi_{ij}\Big]\\
\le{}& \sum_{j=1}^{n}\Big[-K_{ij}^{1}e_{ij}^{2} - K_{ij}^{2}|e_{ij}|^{1+\mu_1} - K_{ij}^{3}|e_{ij}|^{1+\mu_2} + \frac{e_{ij}^{2}}{2} + \frac{\tilde{d}_{ij}^{2}}{2}\Big] + \sum_{j=1}^{n-1}\frac{g_{ij}}{2}e_{ij}^{2}\\
& + \sum_{j=2}^{n}\Big[-\frac{\kappa_{ij}^{1}}{\tau_{ij}}|\xi_{ij}|^{1+\mu_1} - \frac{\kappa_{ij}^{2}}{\tau_{ij}}\xi_{ij}^{2} - \frac{\kappa_{ij}^{3}}{\tau_{ij}}|\xi_{ij}|^{1+\mu_2} + \frac{\xi_{ij}^{2}}{2} + \frac{(\dot{x}_{ij}^{d})^{2}}{2} + \frac{g_{i(j-1)}}{2}\xi_{ij}^{2}\Big]\\
={}& -\sum_{j=1}^{n}K_{ij}^{2}\big(e_{ij}^{2}\big)^{\frac{1+\mu_1}{2}} - \sum_{j=2}^{n}\frac{\kappa_{ij}^{1}}{\tau_{ij}}\big(\xi_{ij}^{2}\big)^{\frac{1+\mu_1}{2}} - \sum_{j=1}^{n}K_{ij}^{3}\big(e_{ij}^{2}\big)^{\frac{1+\mu_2}{2}} - \sum_{j=2}^{n}\frac{\kappa_{ij}^{3}}{\tau_{ij}}\big(\xi_{ij}^{2}\big)^{\frac{1+\mu_2}{2}}\\
& - \Big[\sum_{j=1}^{n}\Big(K_{ij}^{1}-\frac{1}{2}\Big)e_{ij}^{2} - \sum_{j=1}^{n-1}\frac{g_{ij}}{2}e_{ij}^{2}\Big] - \sum_{j=2}^{n}\Big(\frac{\kappa_{ij}^{2}}{\tau_{ij}}-\frac{1}{2}-\frac{g_{i(j-1)}}{2}\Big)\xi_{ij}^{2} + \sum_{j=1}^{n}\frac{\tilde{d}_{ij}^{2}}{2} + \sum_{j=2}^{n}\frac{(\dot{x}_{ij}^{d})^{2}}{2}.
\end{aligned}
$$
Define $\Theta_i = \sum_{j=1}^{n}\frac{\tilde{d}_{ij}^{2}}{2} + \sum_{j=2}^{n}\frac{(\dot{x}_{ij}^{d})^{2}}{2}$. Since $\dot{x}_{ij}^{d}$ is bounded and $|\tilde{d}_{ij}| \le \delta_{ij}$ by Theorem 2, it follows that $\Theta_i$ is positive and bounded. Applying Lemma A.2 from [45], we obtain
$$
\begin{aligned}
\dot{W}_i \le{}& -\min\Big\{\min_{j\in[1,n]}\{K_{ij}^{2}\},\ \min_{j\in[2,n]}\{\kappa_{ij}^{1}/\tau_{ij}\}\Big\}\cdot\Big[\sum_{j=1}^{n}\big(e_{ij}^{2}\big)^{\frac{1+\mu_1}{2}} + \sum_{j=2}^{n}\big(\xi_{ij}^{2}\big)^{\frac{1+\mu_1}{2}}\Big]\\
& -\min\Big\{\min_{j\in[1,n]}\{K_{ij}^{3}\},\ \min_{j\in[2,n]}\{\kappa_{ij}^{3}/\tau_{ij}\}\Big\}\cdot\Big[\sum_{j=1}^{n}\big(e_{ij}^{2}\big)^{\frac{1+\mu_2}{2}} + \sum_{j=2}^{n}\big(\xi_{ij}^{2}\big)^{\frac{1+\mu_2}{2}}\Big]\\
& -\min\Big\{\min_{j\in[1,n-1]}\Big\{K_{ij}^{1}-\tfrac{1}{2}-\tfrac{\bar{g}_{ij}}{2}\Big\},\ K_{in}^{1}-\tfrac{1}{2},\ \min_{j\in[2,n]}\Big\{\tfrac{\kappa_{ij}^{2}}{\tau_{ij}}-\tfrac{1}{2}-\tfrac{\bar{g}_{i(j-1)}}{2}\Big\}\Big\}\cdot\Big[\sum_{j=1}^{n}e_{ij}^{2} + \sum_{j=2}^{n}\xi_{ij}^{2}\Big] + \Theta_i.
\end{aligned}
$$
With reference to Lemmas 1 and 2 in [20], we can derive
$$
\begin{aligned}
\dot{W}_i \le{}& -2^{\frac{1+\mu_1}{2}}\min\Big\{\min_{j\in[1,n]}\{K_{ij}^{2}\},\ \min_{j\in[2,n]}\{\kappa_{ij}^{1}/\tau_{ij}\}\Big\}\Big(\sum_{j=1}^{n}\tfrac{e_{ij}^{2}}{2} + \sum_{j=2}^{n}\tfrac{\xi_{ij}^{2}}{2}\Big)^{\frac{1+\mu_1}{2}}\\
& - 2\min\Big\{\min_{j\in[1,n-1]}\Big\{K_{ij}^{1}-\tfrac{1}{2}-\tfrac{\bar{g}_{ij}}{2}\Big\},\ K_{in}^{1}-\tfrac{1}{2},\ \min_{j\in[2,n]}\Big\{\tfrac{\kappa_{ij}^{2}}{\tau_{ij}}-\tfrac{1}{2}-\tfrac{\bar{g}_{i(j-1)}}{2}\Big\}\Big\}\Big(\sum_{j=1}^{n}\tfrac{e_{ij}^{2}}{2} + \sum_{j=2}^{n}\tfrac{\xi_{ij}^{2}}{2}\Big)\\
& - 2^{\frac{1+\mu_2}{2}}(2n-1)^{\frac{1-\mu_2}{2}}\min\Big\{\min_{j\in[1,n]}\{K_{ij}^{3}\},\ \min_{j\in[2,n]}\{\kappa_{ij}^{3}/\tau_{ij}\}\Big\}\Big(\sum_{j=1}^{n}\tfrac{e_{ij}^{2}}{2} + \sum_{j=2}^{n}\tfrac{\xi_{ij}^{2}}{2}\Big)^{\frac{1+\mu_2}{2}} + \Theta_i\\
={}& -\eta_i W_i^{\frac{1+\mu_1}{2}} - \sigma_i W_i - \zeta_i W_i^{\frac{1+\mu_2}{2}} + \Theta_i \ \le\ -\eta_i W_i^{\frac{1+\mu_1}{2}} - \zeta_i W_i^{\frac{1+\mu_2}{2}} + \Theta_i,
\end{aligned}
$$
where the non-positive term $-\sigma_i W_i$ has been dropped in the last step.
According to (38), we can easily see that $W_i$ is bounded within the finite time interval $[0, T_{i\_ob}]$. When $t > T_{i\_ob}$, $\Theta_i \le \frac{1}{2}\sum_{j=1}^{n}\delta_{ij}^{2} + \frac{1}{2}\sum_{j=2}^{n}B_{ij}^{2}$ can be guaranteed by Theorem 2.
Let $\Xi_i = [e_{i1}, \ldots, e_{in}, \xi_{i2}, \ldots, \xi_{in}]^{T}$ denote the error vector. Theorem 2 in [44] then guarantees that $\Xi_i$ converges to an arbitrarily small neighborhood of the origin
$$\Omega = \Big\{\Xi_i \,\Big|\, W_i \le \Big(\frac{\Theta_i}{\eta_i\big(2-2^{\frac{1+\mu_1}{2}}\big)}\Big)^{\frac{2}{\mu_1+1}}\Big\}.$$
When $\Xi_i \in \bar{\Omega}$ (i.e., outside $\Omega$), we obtain
$$\dot{W}_i \le -\eta_i\big(2^{\frac{1+\mu_1}{2}}-1\big)W_i^{\frac{1+\mu_1}{2}} - \zeta_i W_i^{\frac{1+\mu_2}{2}}.$$
The maximum limit for the convergence time is
$$T_i^{c} = T_{i\_ob} + \frac{2}{\zeta_i(\mu_2-1)} + \frac{1}{\eta_i\big(2^{\frac{1+\mu_1}{2}}-1\big)}\cdot\frac{2}{1-\mu_1},$$
this result is independent of the initial tracking and filtering errors, thereby concluding the proof. □
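For a feel of the magnitude of the bound (31), the short computation below plugs in the exponents of Remark 8 together with assumed values of $\eta_i$, $\zeta_i$, and $T_{i\_ob}$ (these three numbers are illustrative only, not taken from the paper).

```python
# Illustrative evaluation of the settling-time bound (31).
# eta_i, zeta_i, and T_ob are assumed values; mu1, mu2 follow Remark 8.
mu1, mu2 = 0.5, 1.1
eta_i, zeta_i, T_ob = 1.0, 0.5, 0.5
T_c = (T_ob
       + 2.0 / (zeta_i * (mu2 - 1.0))
       + 2.0 / (eta_i * (2 ** ((1 + mu1) / 2) - 1.0) * (1.0 - mu1)))
print("upper bound on consensus tracking time T_i^c =", round(T_c, 2), "s")
```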
Remark 4.
Although we introduce a non-negative parameter λ when constructing the disturbance feedforward compensation term, this parameter is not explicitly involved in the selection of the control gains (30). Superficially, this appears to be because inequality (34) ingeniously cancels all terms containing λ; however, the result is not purely coincidental. If we select λ = 0, Theorem 3 degenerates into a general disturbance feedforward compensation method. Conversely, as λ approaches infinity, Theorem 3 degenerates into a conditional disturbance negation method. Therefore, both the general disturbance compensation method and the conditional disturbance negation method are special cases of Theorem 3, and the control gain selection and stability derivation of Theorem 3 are fully applicable to these two specific scenarios.
Remark 5.
According to fixed-time convergence theory, the larger the control gains, the faster the convergence of the trajectory tracking curves. However, excessively large gains often lead to input saturation, introducing nonlinear uncertain factors that affect system stability. It is well known that, under the same control gains, the conditional disturbance negation method can utilize beneficial disturbances to accelerate the convergence of the control system. When the disturbance and the tracking error have the same sign, the observed value compensates entirely for the disturbance; conversely, when they have opposite signs, the disturbance is exploited to expedite convergence. However, in the latter case, inspection of the closed-loop equations (29) reveals that the tracking accuracy is inevitably affected due to the simultaneous presence of the disturbance observation value and the disturbance itself. This issue does not arise in general disturbance feedforward compensation, which eliminates all disturbances, whether beneficial or not. Therefore, by considering λ ∈ (0, ∞), we can integrate the advantages of these two disturbance compensation methods: by adjusting λ appropriately, we not only accelerate the closed-loop system's convergence but also significantly reduce the tracking errors.
Remark 6.
We have made incremental but significant improvements to the existing control algorithm proposed in [43]. In reference [43], only the term $-(1-e^{-\lambda|e|})\operatorname{sign}(\hat{d}e)\hat{d}$ is used. As pointed out in that work, excessively large values of λ inevitably degrade tracking accuracy: when the ACDN governor behaves like the signum function, i.e., $\lambda \to +\infty$, it switches rapidly between utilization mode and compensation mode, causing speed fluctuations and deteriorating tracking accuracy. This greatly restricts the admissible range of λ. In the method we propose, when the tracking error $e$ is large, the conditional disturbance negation part dominates, satisfying the requirements of robustness and rapid convergence; when the tracking error is small ($e \to 0$), the general disturbance compensation part dominates, significantly reducing the tracking error. Moreover, if $\operatorname{sign}(\hat{d}e) < 0$, then $-(1-e^{-\lambda|e|})\operatorname{sign}(\hat{d}e)\hat{d} = (1-e^{-\lambda|e|})\hat{d}$ and $-(1-e^{-\lambda|e|})\operatorname{sign}(\hat{d}e)\hat{d} - e^{-\lambda|e|}\hat{d} = (1-2e^{-\lambda|e|})\hat{d}$. If $\operatorname{sign}(\hat{d}e) > 0$, then $-(1-e^{-\lambda|e|})\operatorname{sign}(\hat{d}e)\hat{d} = -\hat{d} + e^{-\lambda|e|}\hat{d}$ and $-(1-e^{-\lambda|e|})\operatorname{sign}(\hat{d}e)\hat{d} - e^{-\lambda|e|}\hat{d} = -\hat{d}$. This is why the proposed algorithm performs better despite only slight modifications. Once the distributed fixed-time observer, the fixed-time disturbance observer, and the adaptive selective disturbance elimination backstepping controller achieve convergence, the upper bound on the convergence time of the fixed-time consensus tracking protocol with adaptive selective disturbance elimination can be determined as
$$T = T_o + \max_{i\in\{1,\ldots,N\}} T_i^{c}.$$
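The two sign cases discussed in Remark 6 can be checked numerically. The sketch below evaluates the ASDE compensation quantity $-(1-e^{-\lambda|e|})\operatorname{sign}(\hat{d}e)\hat{d} - e^{-\lambda|e|}\hat{d}$ for a fixed estimate $\hat{d}$: detrimental disturbances ($\operatorname{sign}(\hat{d}e) > 0$) are fully cancelled, while beneficial ones are cancelled only near $e = 0$ and exploited for large $|e|$.

```python
import numpy as np

# ASDE compensation quantity from Remark 6:
#   c(e) = -(1 - exp(-lam*|e|)) * sign(d_hat*e) * d_hat - exp(-lam*|e|) * d_hat.
# For sign(d_hat*e) > 0 it equals -d_hat (full compensation); for sign(d_hat*e) < 0
# it equals (1 - 2*exp(-lam*|e|))*d_hat, i.e. compensation near e = 0 and
# disturbance utilization for large |e|.
def asde_comp(e, d_hat, lam):
    return (-(1.0 - np.exp(-lam*abs(e))) * np.sign(d_hat*e) * d_hat
            - np.exp(-lam*abs(e)) * d_hat)

d_hat, lam = 1.0, 5.0     # lam = 5 as in Remark 8; d_hat is an illustrative estimate
for e in (-2.0, -0.1, 0.1, 2.0):
    print(f"e = {e:+.1f}   compensation = {asde_comp(e, d_hat, lam):+.3f}")
```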

4. Simulation Results

In this section, a third-order nonlinear system is utilized to illustrate the effectiveness of the proposed protocol. The MAS used in the simulation consists of one leader and five followers.
The leader’s dynamic is
$$\dot{x}_{01}(t) = x_{02}(t), \qquad \dot{x}_{02}(t) = f_{02}(\bar{x}_{02}(t)) + \frac{1}{D}x_{03}(t), \qquad \dot{x}_{03}(t) = f_{03}(\bar{x}_{03}(t)) + \frac{1}{M}u_0(t).$$
The models of the five followers are
$$\dot{x}_{i1}(t) = x_{i2}(t) + d_{i1}(t), \qquad \dot{x}_{i2}(t) = s_i f_{i2}(\bar{x}_{i2}(t)) + \frac{1}{D}x_{i3}(t) + d_{i2}(t), \qquad \dot{x}_{i3}(t) = s_i f_{i3}(\bar{x}_{i3}(t)) + \frac{1}{M}u_i(t) + d_{i3}(t).$$
Let $x_{01}$, $x_{02}$, $x_{03}$, and $u_0$ denote the states and control input of the leader, while $x_{i1}$, $x_{i2}$, $x_{i3}$, and $u_i$ ($i = 1, \ldots, 5$) represent the states and control inputs of the five followers. The terms $d_{i1}$, $d_{i2}$, and $d_{i3}$ are disturbances affecting the followers. The coefficients $s_i$ take values sequentially from 0.1 to 0.5. For the leader model, $f_{02}(\bar{x}_{02}(t)) = -\frac{N}{D}\sin(x_{01}(t)) - \frac{B}{D}x_{02}(t)$ and $f_{03}(\bar{x}_{03}(t)) = -\frac{K_m}{M}x_{02}(t) - \frac{H}{M}x_{03}(t)$, and the values of the parameters $N$, $B$, $D$, $M$, $H$, $K_m$ are given in [46]. Note that $f_{i2}(\bar{x}_{i2}(t))$ and $f_{i3}(\bar{x}_{i3}(t))$ ($i = 1, 2, \ldots, 5$) in the follower models use the same expressions as $f_{02}(\bar{x}_{02}(t))$ and $f_{03}(\bar{x}_{03}(t))$ in the leader's model.
The MASs’ communication topology is depicted in Figure 1. The Laplacian matrix is
$$
L = \begin{bmatrix}
2 & -1 & -1 & 0 & 0\\
-1 & 2 & 0 & -1 & 0\\
-1 & 0 & 2 & -1 & 0\\
0 & -1 & -1 & 3 & -1\\
0 & 0 & 0 & -1 & 1
\end{bmatrix},
$$
the leader accessibility matrix is defined as B = diag [ 1 , 0 , 0 , 0 , 0 ] , and we can calculate λ min ( L + B ) = 0.1338 .
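The reported value of $\lambda_{\min}(L+B)$ can be reproduced directly from the matrices above; the following snippet is a straightforward verification (the sign pattern of $L$ is the standard Laplacian one).

```python
import numpy as np

# Smallest eigenvalue of L + B for the topology of Figure 1.
# L is the Laplacian given above; B = diag(1, 0, 0, 0, 0) pins follower 1 to the leader.
L = np.array([[ 2, -1, -1,  0,  0],
              [-1,  2,  0, -1,  0],
              [-1,  0,  2, -1,  0],
              [ 0, -1, -1,  3, -1],
              [ 0,  0,  0, -1,  1]], dtype=float)
B = np.diag([1.0, 0.0, 0.0, 0.0, 0.0])
lam_min = np.linalg.eigvalsh(L + B).min()
print("lambda_min(L + B) =", lam_min)   # the paper reports approximately 0.1338
```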
Remark 7.
Figure 1 illustrates the network topology of the MASs. Node 0 is designated as the leader, with nodes 1 to 5 functioning as the followers. Within the network topology, followers are configured to receive data from the leader, while the leader operates independently without accepting information from the followers. Additionally, the followers can communicate with each other, enabling local information exchange and coordination. This topology reflects a leader–follower hierarchy, a common structure in MASs for tasks such as formation control and distributed optimization. Excitingly, this topology exhibits scalability, enabling the integration of additional followers without altering the leader’s role, thereby extending its applicability to larger-scale multiagent systems. For more details on graph theory related to topology, please refer to [38].
To validate the effectiveness of the proposed method, we analyze two scenarios with different initial conditions for the followers. A sinusoidal command and a step command are given in scenarios 1 and 2, respectively. Furthermore, two initial conditions are assigned to each scenario:
(a) $x_0 = [0, 0, 0]^{T}$, $x_1 = [3, 0, 0]^{T}$, $x_2 = [2, 0, 0]^{T}$, $x_3 = [0, 0, 0]^{T}$, $x_4 = [2, 0, 0]^{T}$, $x_5 = [3, 0, 0]^{T}$; $\hat{x}_0^{1} = [2, 0, 0]^{T}$, $\hat{x}_0^{2} = [1, 0, 0]^{T}$, $\hat{x}_0^{3} = [0, 0, 0]^{T}$, $\hat{x}_0^{4} = [1, 0, 0]^{T}$, $\hat{x}_0^{5} = [2, 0, 0]^{T}$;
(b) $x_0 = [0, 0, 0]^{T}$, $x_1 = [30, 0, 0]^{T}$, $x_2 = [20, 0, 0]^{T}$, $x_3 = [0, 0, 0]^{T}$, $x_4 = [20, 0, 0]^{T}$, $x_5 = [30, 0, 0]^{T}$; $\hat{x}_0^{1} = [20, 0, 0]^{T}$, $\hat{x}_0^{2} = [10, 0, 0]^{T}$, $\hat{x}_0^{3} = [0, 0, 0]^{T}$, $\hat{x}_0^{4} = [10, 0, 0]^{T}$, $\hat{x}_0^{5} = [20, 0, 0]^{T}$.
In Scenario 1, $x_{01}^{d} = 2\sin(2.25t)$, $d_{i1} = 10\sin(\pi t)$, $d_{i2} = 2\sin(\pi t)$, and $d_{i3} = \sin(\pi t)$.
Remark 8.
The gains of the DFTO are $\alpha_1 = \alpha_2 = \alpha_3 = 2.7$, $\beta_1 = \beta_2 = \beta_3 = 7.8$, $\gamma_1 = \gamma_2 = 0.1\lambda_{\min}(L+B)$, $\gamma_3 = 0.4\lambda_{\min}(L+B)$, and $c_3 = 10$, with the exponent in the non-smooth terms chosen as $\mu = 2$. The order of the FTDO is selected as $l = 4$. Its gains are selected as $k_{ij1} = 24$, $k_{ij2} = 24$, $k_{ij3} = 216$, $k_{ij4} = 216$, $k_{ij5} = 864$, $k_{ij6} = 864$, $k_{ij7} = 1296$, $k_{ij8} = 1296$, and the exponents in the non-smooth terms are $\rho = 0.8$ and $\varrho = 1.2$. For the ASDE controller, the gains of the stabilization function $x_{i2}^{d}$ are $K_{i1}^{1} = 1.1$, $K_{i1}^{2} = 0.8$, $K_{i1}^{3} = 0.1$, while the gains of the stabilization function $x_{i3}^{d}$ are $K_{i2}^{1} = 8.3$, $K_{i2}^{2} = 0.8$, $K_{i2}^{3} = 0.1$. The gains of the control law $u_i$ are chosen as $K_{i3}^{1} = 0.6$, $K_{i3}^{2} = 0.8$, $K_{i3}^{3} = 0.1$. The gains of the filter corresponding to $x_{i2}^{c}$ are $\kappa_{i2}^{1} = 5$, $\kappa_{i2}^{2} = 0.2$, $\kappa_{i2}^{3} = 0.1$, and those for $x_{i3}^{c}$ are $\kappa_{i3}^{1} = 5$, $\kappa_{i3}^{2} = 0.83$, $\kappa_{i3}^{3} = 0.1$. The time constants of the command filters are selected as $\tau_{i2} = \tau_{i3} = 0.1$. In the ASDE controller, the exponents of the non-smooth terms are chosen as $\mu_1 = 0.5$ and $\mu_2 = 1.1$, with the adaptive factor $\lambda = 5$. Notably, the selected parameters satisfy the conditions specified in Theorems 1, 2, and 3.
In Scenario 2, the command is
$$x_{01}^{d} = \begin{cases} 0, & t \le 3.5,\\ 1.3, & t > 3.5. \end{cases}$$
The disturbances and parameters, except that $\gamma_1 = \gamma_2 = \gamma_3 = 20\lambda_{\min}(L+B)$ and $c_3 = 50$, are all equal to those in Scenario 1. Prior to being sent to the leader, the command is processed through a filter with transfer function $\frac{1}{0.1s+1}$.
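As a small illustration of this prefiltering step, the sketch below passes the Scenario-2 step command through a forward-Euler discretization of $1/(0.1s+1)$ (the discretization and step size are assumptions for illustration).

```python
import numpy as np

# Reference prefilter of Scenario 2: the step command is passed through 1/(0.1s + 1).
dt, tau_f = 1e-3, 0.1
t = np.arange(0.0, 7.0, dt)
cmd = np.where(t <= 3.5, 0.0, 1.3)        # raw step command x_01^d
filt = np.zeros_like(cmd)
for k in range(1, len(t)):                # forward-Euler discretization of the filter
    filt[k] = filt[k-1] + dt/tau_f * (cmd[k-1] - filt[k-1])
print("filtered command at t = 4 s:", round(filt[int(4.0/dt)], 4))
```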
The ITAE index is adopted to compare and analyze the performance of the aforementioned methods
$$J_{\mathrm{ITAE}} = \int_{0}^{T_f} t\,|e(t)|\,\mathrm{d}t,$$
where e ( t ) is the error, and T f represents the simulation time.
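A direct numerical evaluation of the ITAE index is straightforward; the sketch below uses the trapezoidal rule on a toy exponentially decaying error (the error signal is illustrative, not from the paper's simulations).

```python
import numpy as np

def itae(t, e):
    """Time-weighted absolute error, integrated with the trapezoidal rule."""
    y = t * np.abs(e)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

# Toy usage: an error that decays exponentially from 1
t = np.linspace(0.0, 10.0, 10001)
e = np.exp(-2.0 * t)
print("J_ITAE =", itae(t, e))   # analytic value is 1/4 for this toy error
```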
We compared the adaptive selective disturbance elimination backstepping controller (ASDE) with the adaptive conditional disturbance negation backstepping controller (ACDN), conditional disturbance negation backstepping controller (CDN), and non-smooth backstepping controller (NBCDC). The results are illustrated in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, and the performance results are presented in Figure 10.
Remark 9.
We have selected follower 4 as a representative example in Scenario 1, while in Scenario 2, follower 5 has been chosen as the illustrative case. The remaining followers demonstrate behavior similar to the examples provided. Figure 2 and Figure 6 illustrate that the DFTO converges within 1.2 s, irrespective of the initial states, validating Theorem 1.
Remark 10.
Figure 3 and Figure 7 illustrate that all followers successfully reach convergence in less than 2.3 s, regardless of initial conditions. This observation confirms that the advanced ASDE effectively accomplished consensus tracking in a fixed time. Additionally, these results validate the assertions posited in Theorem 3.
Remark 11.
Figure 4 and Figure 8 indicate that in initial condition (a), the controller designed with ASDE demonstrates superior performance in terms of convergence speed and tracking accuracy when compared to ACDN, CDN, and NBCDC. Additionally, the proposed controller significantly improves control performance through the utilization of approximate control input. Figure 10 presents the ITAE index for each controller in the initial state (b) of the two scenarios. Among the four methods evaluated, the ASDE method proposed in this paper achieves the lowest ITAE index, further highlighting its superior performance and demonstrating the effectiveness of ASDE.
Remark 12.
As illustrated in Figure 5 and Figure 9, the disturbance estimation performance of the FTDO is evaluated under two different scenarios. We can observe that the FTDO is capable of accurately estimating lumped disturbances within a fixed time frame, regardless of the initial conditions, validating Theorem 2. Moreover, the FTDO’s insensitivity to initial conditions underscores its significant advantage in practical applications, particularly in scenarios where initial states are frequently unknown, uncertain, or unpredictable. This robustness ensures reliable performance even in the absence of precise system initialization.
Furthermore, we are surprised to observe that ACDN exhibits the worst tracking performance in Figure 3 and Figure 7. This is because when the tracking error $e_{ij}$ is very close to zero, the disturbance compensation term of ACDN, $-(1-e^{-\lambda|e_{ij}|})\operatorname{sign}(\hat{d}_{ij}e_{ij})\hat{d}_{ij}$, approaches zero. At this moment the system still experiences disturbances ($\hat{d}_{ij} \neq 0$), leading to a deterioration in ACDN's tracking performance. To address this, ASDE introduces an additional term, $-e^{-\lambda|e_{ij}|}\hat{d}_{ij}$, into the existing compensation term of ACDN. When $e_{ij}$ is very close to zero, this term approximates the disturbance compensation term of NBCDC, $-\hat{d}_{ij}$. As a result, ASDE is expected to exhibit tracking performance similar to that of NBCDC near the origin. Additionally, we can observe that when the tracking error crosses zero, the disturbance compensation term of CDN, $-\operatorname{sign}(\hat{d}_{ij}e_{ij})\hat{d}_{ij}$, becomes ineffective. Consequently, in the presence of disturbances, fluctuations occur in the tracking error at the zero-crossing point. In contrast, the disturbance compensation term of ASDE, $-(1-e^{-\lambda|e_{ij}|})\operatorname{sign}(\hat{d}_{ij}e_{ij})\hat{d}_{ij} - e^{-\lambda|e_{ij}|}\hat{d}_{ij}$, ensures that the compensation mechanism remains active even when the tracking error crosses zero, thereby preventing fluctuations at zero-crossing points.

5. Conclusions

This study addresses the issue of fixed-time nonlinear consensus tracking in a specific type of multiagent system (MAS) affected by disturbances. By employing adaptive selective disturbance elimination techniques, we introduce an innovative protocol that ensures consensus tracking within a fixed time while adaptively rejecting disturbances. The framework incorporates a distributed fixed-time observer (DFTO), a fixed-time disturbance observer (FTDO), and an adaptive selective disturbance elimination backstepping controller (ASDE). Both theoretical analysis and simulation outcomes demonstrate that the newly developed DFTO expedites the estimation of the leader's output, achieving convergence within a fixed time. Meanwhile, the fixed-time disturbance observer efficiently estimates the lumped disturbances and their derivatives. The adaptive backstepping controller with selective disturbance compensation effectively differentiates between beneficial and harmful disturbance effects, achieving faster convergence and better robustness compared to existing methods. Future work will involve combining the proposed method with other control methods that exhibit specified-time convergence characteristics and are less dependent on parameter design, thereby reducing the complexity of parameter tuning in fixed-time control. Additionally, further experimental validation will be carried out to assess the efficacy and real-world applicability of the proposed approach.

Author Contributions

Validation, writing—original draft, G.X.; writing—review and editing, X.T.; writing—editing, G.C.; writing—editing, X.H.; funding acquisition, X.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work acknowledges the funding from the Beijing Natural Science Foundation under Grant L241007, the National Nature Science Foundation of China under Grant 62477045, and the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB0860000.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Olfati-Saber, R.; Murray, R. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar]
  2. Kim, Y.; Mesbahi, M. On maximizing the second smallest eigenvalue of a state-dependent graph laplacian. IEEE Trans. Autom. Control. 2006, 51, 116–120. [Google Scholar]
  3. Zeng, S.; Doan, T.T.; Romberg, J. Finite-time convergence rates of decentralized stochastic approximation with applications in multi-agent and multi-task learning. IEEE Trans. Autom. Control 2023, 68, 2758–2773. [Google Scholar] [CrossRef]
  4. Mi, W.; Luo, L.; Zhong, S. Fixed-time consensus tracking for multi-agent systems with a nonholomonic dynamics. IEEE Trans. Autom. Control 2023, 68, 1161–1168. [Google Scholar]
  5. Wu, W.; Tong, S. Observer-based fixed-time adaptive fuzzy consensus dsc for nonlinear multiagent systems. IEEE Trans. Cybern. 2023, 53, 5881–5891. [Google Scholar] [PubMed]
  6. Hou, H.-Q.; Liu, Y.-J.; Lan, J.; Liu, L. Adaptive fuzzy fixed time time-varying formation control for heterogeneous multiagent systems with full state constraints. IEEE Trans. Fuzzy Syst. 2023, 31, 1152–1162. [Google Scholar]
  7. Guo, W.; Shi, L.; Sun, W.; Jahanshahi, H. Predefined-time average consensus control for heterogeneous nonlinear multi-agent systems. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 2989–2993. [Google Scholar]
  8. Song, Y.; Ye, H.; Lewis, F.L. Prescribed-time control and its latest developments. IEEE Trans. Syst. Man, Cybern. Syst. 2023, 53, 4102–4116. [Google Scholar]
  9. Ren, W.; Beard, R. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661. [Google Scholar]
  10. Li, Z.; Duan, Z.; Chen, G.; Huang, L. Consensus of multiagent systems and synchronization of complex networks: A unified viewpoint. IEEE Trans. Circuits Syst. I Regul. Pap. 2010, 57, 213–224. [Google Scholar]
  11. Yu, W.; Chen, G.; Cao, M. Some necessary and sufficient conditions for second-order consensus in multi-agent dynamical systems. Automatica 2010, 46, 1089–1095. [Google Scholar]
  12. Ding, Z. Consensus disturbance rejection with disturbance observers. IEEE Trans. Ind. Electron. 2015, 62, 5829–5837. [Google Scholar] [CrossRef]
  13. Polyakov, A. Generalized Homogeneity in Systems and Control; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  14. Polyakov, A. Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Autom. Control 2012, 57, 2106–2110. [Google Scholar]
  15. Polyakov, A.; Efimov, D.; Perruquetti, W. Finite-time and fixed-time stabilization: Implicit lyapunov function approach. Automatica 2015, 51, 332–340. [Google Scholar]
  16. Zuo, Z.; Tie, L. A new class of finite-time nonlinear consensus protocols for multi-agent systems. Int. J. Control 2014, 87, 363–370. [Google Scholar]
  17. Zuo, Z.; Tie, L. Distributed robust finite-time nonlinear consensus protocols for multi-agent systems. Int. J. Syst. Sci. 2016, 47, 1366–1375. [Google Scholar]
  18. Zuo, Z.; Han, Q.-L.; Ning, B.; Ge, X.; Zhang, X.-M. An overview of recent advances in fixed-time cooperative control of multiagent systems. IEEE Trans. Ind. Inform. 2018, 14, 2322–2334. [Google Scholar]
  19. Chen, C.-C.; Sun, Z.-Y. Fixed-time stabilisation for a class of high-order non-linear systems. IET Control. Theory Appl. 2018, 12, 2578–2587. [Google Scholar]
  20. Ning, B.; Han, Q.-L.; Zuo, Z.; Ding, L.; Lu, Q.; Ge, X. Fixed-time and prescribed-time consensus control of multiagent systems and its applications: A survey of recent trends and methodologies. IEEE Trans. Ind. Inform. 2023, 19, 1121–1135. [Google Scholar]
  21. Li, X.; Wen, C.; Wang, J. Lyapunov-based fixed-time stabilization control of quantum systems. J. Autom. Intell. 2022, 1, 100005. [Google Scholar]
  22. Wu, Z.; Ma, M.; Xu, X.; Liu, B.; Yu, Z. Predefined-time parameter estimation via modified dynamic regressor extension and mixing. J. Frankl. Inst. 2021, 358, 6897–6921. [Google Scholar] [CrossRef]
  23. Andrieu, V.; Praly, L.; Astolfi, A. Homogeneous approximation, recursive observer design, and output feedback. SIAM J. Control. Optim. 2008, 47, 1814–1850. [Google Scholar] [CrossRef]
  24. Parsegov, S.; Polyakov, A.; Shcherbakov, P. Fixed-time consensus algorithm for multi-agent systems with integrator dynamics. IFAC Proc. Vol. 2013, 46, 110–115. [Google Scholar] [CrossRef]
  25. Zuo, Z. Nonsingular fixed-time consensus tracking for second-order multi-agent networks. Automatica 2015, 54, 305–309. [Google Scholar] [CrossRef]
  26. Fu, J.; Wang, J. Fixed-time coordinated tracking for second-order multi-agent systems with bounded input uncertainties. Syst. Control. Lett. 2016, 93, 1–12. [Google Scholar] [CrossRef]
  27. Afifa, R.; Ali, S.; Pervaiz, M.; Iqbal, J. Adaptive backstepping integral sliding mode control of a mimo separately excited dc motor. Robotics 2023, 12, 105. [Google Scholar] [CrossRef]
  28. Liu, Y.; Zhang, F.; Huang, P.; Lu, Y. Fixed-Time Consensus Tracking for Second-Order Multiagent Systems Under Disturbance. IEEE Trans. Syst. Man, Cybern. Syst. 2021, 51, 4883–4894. [Google Scholar]
  29. Wang, J.; Chen, L.; Xu, Q. Disturbance estimation-based robust model predictive position tracking control for magnetic levitation system. IEEE/ASME Trans. Mechatronics 2022, 27, 81–92. [Google Scholar]
  30. Wang, F.; He, L. Fpga-based predictive speed control for pmsm system using integral sliding-mode disturbance observer. IEEE Trans. Ind. Electron. 2021, 68, 972–981. [Google Scholar] [CrossRef]
  31. Xu, W.; Junejo, A.K.; Liu, Y.; Islam, M.R. Improved continuous fast terminal sliding mode control with extended state observer for speed regulation of pmsm drive system. IEEE Trans. Veh. Technol. 2019, 68, 10465–10476. [Google Scholar] [CrossRef]
  32. Li, J.; Zhang, L.; Luo, L.; Li, S. Extended state observer based current-constrained controller for a pmsm system in presence of disturbances: Design, analysis and experiments. Control Eng. Pract. 2023, 132, 105412. [Google Scholar]
  33. Yuan, X.; Zuo, Y.; Fan, Y.; Lee, C.H.T. Model-free predictive current control of spmsm drives using extended state observer. IEEE Trans. Ind. Electron. 2022, 69, 6540–6550. [Google Scholar]
  34. Yang, J.; Liu, X.; Sun, J.; Li, S. Sampled-data robust visual servoing control for moving target tracking of an inertially stabilized platform with a measurement delay. Automatica 2022, 137, 110105. [Google Scholar]
  35. Xu, B.; Zhang, L.; Ji, W. Improved non-singular fast terminal sliding mode control with disturbance observer for pmsm drives. IEEE Trans. Transp. Electrif. 2021, 7, 2753–2762. [Google Scholar]
  36. Wang, F.; He, L.; Rodríguez, J. A robust predictive speed control for spmsm systems using a sliding mode gradient descent disturbance observer. IEEE Trans. Energy Convers. 2023, 38, 540–549. [Google Scholar] [CrossRef]
  37. Hou, Q.; Ding, S. Gpio based super-twisting sliding mode control for pmsm. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 747–751. [Google Scholar] [CrossRef]
  38. Tan, X.; Hu, C.; Cao, G.; Wei, Q.; Li, W.; Han, B. Fixed-time antidisturbance consensus tracking for nonlinear multiagent systems with matching and mismatching disturbances. IEEE/CAA J. Autom. Sin. 2024, 11, 1410–1423. [Google Scholar] [CrossRef]
  39. Pu, Z.; Sun, J.; Yi, J.; Gao, Z. On the principle and applications of conditional disturbance negation. IEEE Trans. Syst. Man, Cybern. Syst. 2021, 51, 6757–6767. [Google Scholar]
  40. Sun, J.; Pu, Z.; Yi, J. Conditional disturbance negation based active disturbance rejection control for hypersonic vehicles. Control. Eng. Pract. 2019, 84, 159–171. [Google Scholar]
  41. Ren, C.; Jiang, H.; Mu, C.; Ma, S. Conditional disturbance negation based control for an omnidirectional mobile robot: An energy perspective. IEEE Robot. Autom. Lett. 2022, 7, 11641–11648. [Google Scholar]
  42. Kong, S.; Sun, J.; Wang, J.; Zhou, Z.; Shao, J.; Yu, J. Piecewise compensation model predictive governor combined with conditional disturbance negation for underactuated auv tracking control. IEEE Trans. Ind. Electron. 2023, 70, 6191–6200. [Google Scholar] [CrossRef]
  43. Sun, J.; Xu, S.; Ding, S.; Pu, Z.; Yi, J. Adaptive conditional disturbance negation-based nonsmooth-integral control for pmsm drive system. IEEE/ASME Trans. Mechatronics 2024, 29, 3602–3613. [Google Scholar] [CrossRef]
  44. Sun, J.; Pu, Z.; Yi, J.; Liu, Z. Fixed-time control with uncertainty and measurement noise suppression for hypersonic vehicles via augmented sliding mode observers. IEEE Trans. Ind. Inform. 2020, 16, 1192–1203. [Google Scholar] [CrossRef]
  45. Qian, C.; Lin, W. A continuous feedback approach to global strong stabilization of nonlinear systems. IEEE Trans. Autom. Control 2001, 46, 1061–1079. [Google Scholar] [CrossRef]
  46. Li, Y.; Tong, S.; Liu, Y.; Li, T. Adaptive fuzzy robust output feedback control of nonlinear systems with unknown dead zones based on a small-gain approach. IEEE Trans. Fuzzy Syst. 2014, 22, 164–176. [Google Scholar] [CrossRef]
Figure 1. Communication topology of MASs.
Figure 2. Scenario 1: leader output estimation under different initial states. ((i): initial state (a), (ii): initial state (b)).
Figure 3. Scenario 1: leader output tracking with different initial states. ((i): initial state (a), (ii): initial state (b)).
Figure 4. Scenario 1: tracking errors (i) and control inputs (ii) for the fourth follower.
Figure 5. Scenario 1: disturbance observation for the fourth follower. ((i): estimation of disturbance $d_{41}$, (ii): estimation of disturbance $d_{42}$, (iii): estimation of disturbance $d_{43}$).
Figure 6. Scenario 2: leader output estimation for different initial states. ((i): initial state (a), (ii): initial state (b)).
Figure 7. Scenario 2: tracking of leader output with different initial states. ((i): initial state (a), (ii): initial state (b)).
Figure 8. Scenario 2: tracking errors (i) and control inputs (ii) for the fifth follower.
Figure 9. Scenario 2: disturbance observation for the fifth follower. ((i): estimation of disturbance $d_{51}$, (ii): estimation of disturbance $d_{52}$, (iii): estimation of disturbance $d_{53}$).
Figure 10. Comparative analysis of performance indexes for initial state (b). ((i): ITAE index of followers in scenario 1, (ii): ITAE index of followers in scenario 2).