Article

Finite-Time Tracking Control of Multi-Agent System with External Disturbance

1 School of General Education, Wenzhou Polytechnic, Wenzhou 325035, China
2 Department of Automation, Zhejiang University of Technology, Hangzhou 310023, China
3 Institute of Intelligent Systems and Decision, Wenzhou University, Wenzhou 325027, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(12), 2061; https://doi.org/10.3390/sym17122061
Submission received: 27 September 2025 / Revised: 2 November 2025 / Accepted: 18 November 2025 / Published: 2 December 2025
(This article belongs to the Topic Recent Trends in Nonlinear, Chaotic and Complex Systems)

Abstract

This paper focuses on addressing the finite-time tracking problem of a class of multi-agent systems (MASs) under specific perturbations. Each agent is modeled with general linear dynamics, and an undirected interconnected communication topology is assumed among the agents. A key technical innovation lies in the proposed adaptive rule, which dynamically tunes two critical elements: the parameters of the nonlinear compensation term in the reference input and the coupling weights between neighboring agents. By integrating this adaptive mechanism into a sliding mode control framework, rigorous theoretical analysis confirms that under suitable conditions, the controlled system can converge to an innovative desired switching surface within finite time, thereby further guaranteeing the MASs achieve finite-time bounded (FTB) tracking. The convergence time is elaborated in detail in the subsequent proof. Finally, numerical examples are presented to experimentally substantiate the effectiveness of the proposed approach.

1. Introduction

The cooperative MAS has become increasingly prevalent in real-world applications, spanning synchronization [1,2,3]—such as a flock of birds adjusting their wing beats to fly in formation, multi-player games [4]—like robots collaborating to sort packages in a warehouse, and tracking control [5]—for example, multiple drones following a lead drone to inspect power lines. Consensus, the core issue in distributed cooperative control for MASs, refers to the ability of all agents to reach agreement on a shared state (e.g., position, velocity, or task goal) through only local communication. It is analogous to a team of fire fighters coordinating their positions to put out a fire—no central commander tells every fire fighter what to do; each only communicates with nearby teammates. This distributed nature gives consensus remarkable robustness and scalability, making it a focus of extensive research.
In the simplest scenario, research on MAS consensus began with first-order systems [6]. These systems only require agents to align one state to reach consensus—like a group of people walking in the same direction without worrying about speed. Later, second-order MASs gained more attention, as they involve two interconnected states: agents must agree on both position and velocity to achieve consensus [7]. A typical example is self-driving cars in a platoon—they need to stay at the right distance (position) and move at the same speed to avoid collisions. However, the consensus problem for high-order MASs—where agents have three or more linked states, such as Unmanned Aerial Vehicles (UAVs) coordinating position, speed, and altitude simultaneously—remains an open challenge, with existing literature only providing partial solutions, as discussed in [8].
Uncertainties, caused by external disturbances and imperfect modeling, often degrade MAS performance. External disturbances could be wind gusts disrupting a drone’s flight, while imperfect modeling might arise from slight differences in motor performance between two “identical” robots. Designing a control strategy to mitigate these uncertainties—similar to how a hiker adjusts their steps to stay on the trail despite uneven ground or strong winds—is thus a critical yet challenging task in control engineering. To ensure consensus under such uncertainties, robust consensus protocols are essential. For instance, Li et al. [9] studied the distributed consensus problem for linear MASs with heterogeneous matching uncertainties, covering both leaderless scenarios and cases where a leader has bounded unknown control inputs. For nonholonomic mobile robots—like wheeled robots that cannot move sideways—with parametric uncertainties and underactuated dynamics, a novel distributed iterative learning control approach was proposed to handle time-varying reference tracking [10]. Additionally, a systematic controller was developed for heterogeneous fractional-order MASs (FO-MASs) with interval uncertainties using Lyapunov-based Linear Matrix Inequality (LMI) conditions [11].
We first analyze existing literature, which mostly relies on static consensus frameworks requiring complete state information from neighboring agents. However, in practical systems, physical limitations and cost constraints often prevent accurate real-time state observation. For example, an underwater robot cannot directly measure its exact depth in murky water, and equipping every robot in a large swarm with expensive high-precision sensors is unfeasible. This necessitates sophisticated observer design—observers act as “smart guessers”, using measurable data (e.g., a robot’s propeller speed or a neighbor’s observed position) to estimate unmeasurable states (e.g., exact depth or velocity). Hong et al. [7] and Gao et al. [12] proposed observer-dependent consensus protocols for first- and second-order MASs, respectively. To track an active leader, Hong et al. [7] designed both neighbor-dependent controllers and state estimation rules for each agent, while Gao et al. [12] developed distributed control and estimation schemes for second-order followers to track an accelerating leader. For MASs with general linear dynamics, various observer-based consensus protocols have been proposed [13,14,15], using different design methods to achieve distributed coordination.
However, most of these studies share two key limitations. First, they assume the leader’s reference input is either non-existent or accessible to all followers—an unrealistic assumption for large networks, like a swarm of 100 drones where it is impractical to send the leader’s input to every single drone. Second, the design of coupling parameters in their consensus protocols depends on the smallest real part of non-zero Laplacian eigenvalues (a global topological property). Calculating these eigenvalues for large-scale networks (e.g., 1000 agents) is computationally expensive, even if the entire topology is known.
Another critical performance metric for consensus protocols is convergence rate—how fast agents can reach agreement. Many practical applications require rapid convergence: for example, in emergency rescue, a team of robots must coordinate their search paths within 5 min to find survivors. Saber and Murray [16] showed that the convergence rate of linear consensus algorithms is fundamentally limited by the smallest real part of non-zero Laplacian eigenvalues. This finding has spurred research on network topology optimization, with a focus on maximizing “algebraic connectivity” (related to those eigenvalues) to speed up convergence [17]. However, while maximizing algebraic connectivity improves convergence rate, it only guarantees “asymptotic agreement”—meaning agents will eventually reach consensus, but we cannot specify an exact deadline (e.g., they might take 10 s or 10 min). In practice, “finite-time consensus” (agreeing by a specific time) is often needed, requiring approaches beyond spectral optimization.
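As a toy illustration of the rate limit discussed above (not taken from this paper; the graph and step size are hypothetical), the following sketch iterates the standard discrete-time consensus update x(k+1) = x(k) − hLx(k) on a 4-node path graph. The disagreement decays roughly like exp(−λ_2 t), where λ_2 is the algebraic connectivity:

```python
# Toy illustration (not from the paper): discrete-time consensus
# x(k+1) = x(k) - h * L x(k) on a 4-node path graph 1-2-3-4.
L = [
    [ 1, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  1],
]

x = [4.0, 1.0, -2.0, 5.0]   # initial states
h = 0.1                      # step size (h < 2 / lambda_max for stability)

for _ in range(500):
    Lx = [sum(L[i][j] * x[j] for j in range(4)) for i in range(4)]
    x = [x[i] - h * Lx[i] for i in range(4)]

avg = (4.0 + 1.0 - 2.0 + 5.0) / 4.0   # consensus value = initial average
spread = max(x) - min(x)
print(avg, spread)   # states converge to 2.0; spread shrinks to ~0
```

Because L is symmetric with zero row and column sums, the average is preserved exactly, and the slowest-decaying disagreement mode is governed by λ_2 = 2 − √2 for this graph.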
Recently, finite-time consensus (FTC) of MASs has been widely studied [18,19,20]. Li et al. [18] addressed finite-time consensus for uncertain Lagrangian MASs (e.g., robotic arms) under directed topologies using barrier Lyapunov functions (to constrain errors) and fuzzy systems (to approximate uncertainties). Sharifi et al. [19] extended this work to handle time-varying communication delays and parameter uncertainties. However, both studies focus on 2-dimensional state feedback (position and velocity), limiting their application to high-order MASs. Li et al. [20] proposed an adaptive distributed finite-time observer for heterogeneous leader–follower MASs, enabling finite-time estimation of the leader’s state, but like [18,19], it assumes all agents’ state information is known—a rare scenario in practice, where sensors often cannot measure all states directly.
Building on these works, this paper proposes a distributed adaptive observer for high-order MASs to address the above limitations, with three key contributions as follows.
First, it breaks low-order model constraints by designing an innovative adaptive observer architecture that integrates neighbor-dependent output measurements via a distributed estimation protocol. Unlike traditional first- or second-order observers, which only handle position or velocity information, this observer is applicable to high-order MASs—such as unmanned aerial vehicles (UAVs) that require the monitoring of position, speed, and altitude—and further enhances disturbance rejection capability, with its stability rigorously verified through Lyapunov analysis.
Second, it eliminates global information dependency by developing a fully distributed control framework that enables consensus without relying on global parameters like Laplacian eigenvalues. This framework adaptively estimates unknown reference inputs in real time, adjusts coupling weights based on local topology, and only requires neighbor-to-neighbor communication, making it well-suited for large-scale network applications.
Third, it guarantees explicit convergence time. Different from existing finite-time consensus results that lack clear convergence time bounds, this paper adopts a constructive Lyapunov approach to derive sufficient conditions, ensuring all system trajectories converge to the equilibrium neighborhood within a prescribed time. It also quantitatively characterizes the convergence time T_s as a function of initial conditions and system parameters, providing a clear deadline that supports practical application deployment.
This study investigates the FTC problem of MASs with external disturbances. A novel adaptive sliding mode control scheme is proposed, which ensures that the system trajectories converge to the preset “switching surface” (a core concept of sliding mode control) within finite time while achieving robust disturbance rejection during the consensus process. The remainder of this paper is structured as follows: Section 2 introduces preliminary knowledge and problem formulation, Section 3 elaborates on the design process of the finite-time sliding mode controller, Section 4 provides numerical examples, and Section 5 summarizes the research work.

2. Preliminaries and Problem Statement

2.1. Notations

The notations used in this paper are standard, and the detailed definitions of the symbols can be found in Table 1.
For the MAS under consideration, a directed graph G with N nodes is utilized to depict the communication topology, where V = {v_1, v_2, …, v_N} denotes the vertex set and E ⊆ V × V denotes the edge set. The neighbor set N_i of node i contains all indices j such that (v_j, v_i) ∈ E (aligned with the standard directed-graph convention, under which (v_j, v_i) ∈ E indicates that node j is a neighbor of node i). The weighted adjacency matrix W = [w_ij] ∈ R^{N×N} characterizes the graph G, where w_ij > 0 if (v_j, v_i) ∈ E and w_ij = 0 otherwise.
The degree matrix of G is the diagonal matrix G = diag(g_1, g_2, …, g_N) ∈ R^{N×N} with diagonal elements g_i = Σ_{j∈N_i} w_ij, and the Laplacian matrix of G is defined as L = G − W.
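A minimal sketch of the graph matrices defined in this subsection, for a hypothetical 3-agent topology (the weights are illustrative only):

```python
# Minimal sketch of the matrices in Section 2.1 for a 3-agent example
# (the weights w_ij are hypothetical, chosen only for illustration).
N = 3
# w[i][j] > 0 iff (v_j, v_i) is an edge, i.e. j is a neighbor of i
W = [
    [0.0, 1.0, 0.5],
    [1.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
]

# degree matrix: diagonal with g_i = sum_j w_ij
G_deg = [[sum(W[i]) if i == j else 0.0 for j in range(N)] for i in range(N)]

# Laplacian L = G_deg - W  (rows sum to zero by construction)
L = [[G_deg[i][j] - W[i][j] for j in range(N)] for i in range(N)]

print(L)   # every row of L sums to zero
```

The zero row sums reflect the defining property of a graph Laplacian: the all-ones vector is always in its null space.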

2.2. Preliminary Results

The MAS under investigation consists of N follower agents and a single leader agent. The dynamics of each follower agent are described as follows:
ẋ_i(t) = Ax_i(t) + Bu_i(t) + Fω_i(t),  y_i(t) = Cx_i(t),  ω̇_i(t) = Φω_i(t),  i = 1, 2, …, N,
where x_i ∈ R^n, u_i ∈ R^m, y_i ∈ R^p, and ω_i(t) ∈ R^r denote the state vector, control input, measured output, and exogenous disturbance (generated by the known exosystem ω̇_i = Φω_i) of agent i, respectively. The system matrices A, B, C, F, and Φ are known real matrices.
The dynamics of the leader are governed by
ẋ_0(t) = Ax_0(t) + Bu_0(t),  y_0(t) = Cx_0(t),
where x_0 ∈ R^n and y_0 ∈ R^p denote the leader's state vector and measurable reference output, respectively. The exogenous reference input u_0 ∈ R^m remains unavailable to the follower agents and is assumed to admit the parameterization
u_0(t) = Γ_0^T θ_0,
where Γ_0^T ∈ R^{m×q} is a known basis function matrix, and θ_0 ∈ R^q is an unknown constant parameter vector to be estimated.
Remark 1.
As noted in [21], the system matrices for all agents and the leader are assumed to be identical. This case has practical relevance in biological systems, such as flocks of birds or schools of fish.
Definition 1
(Finite-Time Coordinated Tracking). For each follower i, i = 1, 2, …, N in system (1), design a distributed control law using only local information such that, for any initial condition x_i(t_0) ∈ R^n, i = 0, 1, 2, …, N, there exists a finite settling time T(x_0(t_0), x_1(t_0), …, x_N(t_0)) > 0 ensuring x_i(t) = x_0(t) for all t ≥ t_0 + T.
As Definition 1 focuses on finite-time coordinated tracking (a finite-time-related performance index), we first clarify that the abbreviation FTB (which will be used in Definition 2) refers to Finite-Time Boundedness—a fundamental property that describes whether the state of a dynamic system can be constrained within a predefined bounded region over a specified finite time horizon.
Definition 2
([22,23]). Given positive constants c_1, c_2 (> c_1), δ, a symmetric positive definite matrix R, and a fixed time interval [0, T], the continuous-time nonlinear system
ẋ(t) = f(x(t), ω(t)) + Bu(t),  ω̇(t) = Φω(t),
is said to be robustly FTB with respect to (c_1, c_2, T, R, δ) if
x^T(0)Rx(0) ≤ c_1 and ω^T(0)ω(0) ≤ δ imply x^T(t)Rx(t) < c_2, ∀t ∈ [0, T].
Remark 2.
The FTB property is inherently sensitive to the choice of critical scalars. The parameter T defines the admissible time horizon for system operation, while c 1 and δ must adhere to the constraints in (5) ([24]). Notably, c 2 rigorously quantifies the transient-state bound, ensuring compliance with performance requirements [25,26].
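The FTB property of Definition 2 can be checked numerically for a toy scalar system (all constants below are hypothetical and not taken from the paper):

```python
# Toy numerical check of the FTB property in Definition 2 for the
# scalar system xdot = -x + w, wdot = -w (all parameters hypothetical).
c1, c2, delta, T, R = 1.0, 2.0, 1.0, 5.0, 1.0

x, w = 1.0, 1.0          # x(0)^2 * R = c1, w(0)^2 = delta (boundary case)
dt, ftb = 0.001, True
t = 0.0
while t < T:
    x += dt * (-x + w)
    w += dt * (-w)
    t += dt
    if R * x * x >= c2:   # trajectory left the admissible region
        ftb = False
        break

print(ftb)   # True: x(t)^2 stays below c2 on [0, T]
```

For this example the exact solution is x(t) = (1 + t)e^{−t}, whose maximum is attained at t = 0, so the bound c_2 = 2 is never violated.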
The analysis of the system dynamics (1) and (2) requires the following technical assumptions.
Assumption 1.
( A , B ) is stabilizable, ( A , C ) is detectable, and rank ( B ) = m , rank ( C ) = p .
Assumption 2.
The follower graph G is undirected, and the augmented graph G¯ (including the leader node v_0) contains a spanning tree rooted at v_0.
Assumption 3.
The state of the leader is available to its neighbors only.
The following results have been widely acknowledged for their theoretical and practical significance and will serve as foundational components for our subsequent analysis.
Lemma 1
([27]). Consider a symmetric matrix S expressed in the partitioned form S = [S_ij], where S_11 is an r × r real matrix, S_12 is an r × (n − r) matrix, S_21 = S_12^T is an (n − r) × r matrix, and S_22 is an (n − r) × (n − r) matrix. The necessary and sufficient condition for S < 0 (negative definiteness) is that one of the following two sets of conditions holds:
1. 
S_11 < 0 and S_22 − S_21S_11^{−1}S_12 < 0,
2. 
S_22 < 0 and S_11 − S_12S_22^{−1}S_21 < 0.
Remark 3.
This lemma serves as a core tool for determining the negative definiteness of block symmetric matrices. It avoids the complex operation of directly calculating all eigenvalues of the matrix, making it particularly suitable for stability analysis of high-dimensional systems (e.g., the Lyapunov stability proof for the multi-agent system in this paper). In the proofs of Theorems 1 and 2 in this paper, this lemma is used to verify the negative definiteness of the block matrix corresponding to the derivative of the Lyapunov function, thereby ensuring the stability of the error system. It acts as a key link between “matrix design” and “stability conclusions”.
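A quick numerical illustration of Lemma 1 on a hypothetical 2 × 2 symmetric matrix, comparing the Schur-complement test with a direct negative-definiteness test:

```python
# Numerical illustration of Lemma 1 (Schur complement) on a
# 2x2 symmetric matrix S = [[s11, s12], [s12, s22]] (toy numbers).
s11, s12, s22 = -2.0, 1.0, -2.0

# condition 1: S11 < 0 and S22 - S21 * S11^{-1} * S12 < 0
cond1 = s11 < 0 and (s22 - s12 * (1.0 / s11) * s12) < 0

# direct test of negative definiteness: for a 2x2 symmetric matrix,
# s11 < 0 together with det(S) > 0 is equivalent to S < 0
direct = s11 < 0 and (s11 * s22 - s12 * s12) > 0

print(cond1, direct)   # both True: the two criteria agree
```

Here S has eigenvalues −1 and −3, so both criteria correctly report negative definiteness.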
Lemma 2
([27]). Let Ω ∈ R^{n×n} be a Hermitian matrix (real symmetric matrices are a special case of Hermitian matrices). Then its minimum and maximum eigenvalues λ_min(Ω) and λ_max(Ω) satisfy, for any vector x ∈ R^n, λ_min(Ω)‖x‖² ≤ x^TΩx ≤ λ_max(Ω)‖x‖², which is equivalent to λ_min(Ω)I ≤ Ω ≤ λ_max(Ω)I.
Lemma 3
([27]). Let a, b ≥ 0 be non-negative real numbers and p, q > 1 be real numbers satisfying 1/p + 1/q = 1. Then the inequality ab ≤ a^p/p + b^q/q holds.
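A brief numerical sanity check of Lemma 3 over a few hypothetical sample points:

```python
# Numerical sanity check of Lemma 3 (Young's inequality):
# for a, b >= 0 and 1/p + 1/q = 1, a*b <= a**p / p + b**q / q.
p, q = 3.0, 1.5          # 1/3 + 1/1.5 = 1
ok = True
for a in [0.0, 0.5, 1.0, 2.0, 7.0]:
    for b in [0.0, 0.3, 1.0, 4.0]:
        if a * b > a**p / p + b**q / q + 1e-12:
            ok = False
print(ok)   # True: the inequality holds at every sample point
```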
Lemma 4
([28]). Given a symmetric positive definite matrix P ∈ R^{n×n}, for any vectors x, y ∈ R^n, the following inequality holds:
2x^Ty ≤ x^TPx + y^TP^{−1}y, ∀x, y ∈ R^n.
Remark 4.
This lemma extends Young’s inequality to the matrix domain. By choosing an appropriate positive definite matrix P, it provides a flexible way to bound cross terms in matrix quadratic forms, which is crucial for stability analysis of linear systems with multiple state variables.
Lemma 5
([29]). The finite-time stability of a dynamic system can be characterized using the rapid terminal sliding mode (TSM) technique based on Lyapunov theory. Suppose there exists a Lyapunov function V ( x ) > 0 (for x 0 ) such that its derivative satisfies
V̇(x) + λ_1V(x) + λ_2V^a(x) ≤ 0,
where λ 1 > 0 , λ 2 > 0 , and  0 < a < 1 . Then, the system is finite-time stable, and the settling time can be explicitly calculated as follows:
T_reach ≤ (1/(λ_1(1 − a))) ln((λ_1V^{1−a}(x_0) + λ_2)/λ_2),
where V ( x 0 ) is the initial value of the Lyapunov function at x = x 0 .
Remark 5.
This lemma is the core theoretical basis for “quantifying finite-time convergence” in this paper. Unlike asymptotic stability (which only guarantees convergence as time approaches infinity), it provides an explicit upper bound for the settling time, which is critical for practical applications requiring rapid response (e.g., UAV swarm formation control).
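The bound in Lemma 5 can be checked by direct integration. The sketch below (with hypothetical λ_1, λ_2, a, and V(x_0)) integrates V̇ = −λ_1V − λ_2V^a with Euler steps; for this worst-case ODE the bound is attained with equality, so the numerical hitting time should land near T_reach:

```python
import math

# Sketch of the settling-time bound in Lemma 5: integrate
# Vdot = -lam1*V - lam2*V**a (the equality case of the lemma) and
# compare the hitting time with T_reach. All numbers hypothetical.
lam1, lam2, a, V0 = 1.0, 1.0, 0.5, 4.0

T_reach = 1.0 / (lam1 * (1 - a)) * math.log((lam1 * V0**(1 - a) + lam2) / lam2)

V, t, dt = V0, 0.0, 1e-4
while V > 0.0 and t < 2 * T_reach:
    V = max(0.0, V + dt * (-lam1 * V - lam2 * V**a))
    t += dt

print(T_reach, t)   # hitting time t lands close to the bound T_reach
```

With a = 1/2 the substitution W = √V linearizes the ODE, giving the exact settling time 2 ln 3 ≈ 2.197, which matches T_reach.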
Lemma 6
([30]).
(i) 
Let η(·) be a non-negative, absolutely continuous function defined on the interval [0, T] that satisfies, for almost every t ∈ [0, T], the differential inequality
η̇(t) ≤ ϕ(t)η(t) + ψ(t),
where ϕ(t) and ψ(t) are non-negative, summable functions on [0, T]. Then, for all 0 ≤ t ≤ T, the following inequality holds:
η(t) ≤ e^{∫_0^t ϕ(s)ds}[η(0) + ∫_0^t ψ(s)ds].
(ii) 
In particular, if 
η̇ ≤ ϕη on [0, T] and η(0) = 0,
then
η ≡ 0 on [0, T].
Remark 6.
This lemma is a classic tool for “bounding time-varying functions” in differential equations. It is widely used to analyze the boundedness or convergence of solutions to linear time-varying systems, especially in the presence of external disturbances or uncertain terms.
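A small numerical check of Lemma 6(i) with constant (hypothetical) ϕ and ψ:

```python
import math

# Sketch of Gronwall's inequality (Lemma 6(i)): integrate
# eta_dot = phi*eta + psi with constant phi, psi, and compare eta(t)
# with the bound exp(integral of phi) * (eta(0) + integral of psi).
phi, psi, eta0, T, dt = 0.5, 1.0, 1.0, 3.0, 1e-4

eta, t, ok = eta0, 0.0, True
while t < T:
    eta += dt * (phi * eta + psi)
    t += dt
    bound = math.exp(phi * t) * (eta0 + psi * t)
    if eta > bound + 1e-6:
        ok = False
print(ok)   # True: the trajectory never exceeds the Gronwall bound
```

For constant coefficients the exact solution is η(t) = (η_0 + ψ/ϕ)e^{ϕt} − ψ/ϕ, which indeed stays below e^{ϕt}(η_0 + ψt).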

3. Finite-Time Boundedness Analysis

3.1. Adaptive Distributed Finite-Time Observer

Throughout this work, we maintain the standing assumption that only the output (rather than the full state) of each agent is directly measurable, while the leader state x_0 is accessible only to the leader's neighboring follower agents. We subsequently design a distributed observer for each agent i, enabling local state estimation through neighbor-accessible information. Since θ_0 is unavailable to individual agents, we develop a decentralized observer in which each agent i employs local estimates θ_i and û_i to approximate θ_0 and u_0, respectively. This formulation yields the observer in (11).
x̂̇_i(t) = Ax̂_i(t) + Bû_i(t) − m_i(t)GCε̂_i(t) − d_1 sign(P_1ε̂_i) = Ax̂_i(t) + BΓ_0^Tθ_i(t) − m_i(t)GCε̂_i(t) − d_1 sign(P_1ε̂_i),
As a positive definite symmetric matrix, P 1 serves as the core to ensure the stability of the adaptive finite-time observer. To achieve this, it must strictly satisfy the LMI given in (12).
A^TP_1 + P_1A − C^TC + I < 0
Here, A (state matrix) and C (output matrix) are known inherent parameters of the linear dynamic model for each agent in the multi-agent system, providing the basis for constructing the observer’s stability condition. m i ( t ) denotes the time-varying coupling strength in the observer, which adjusts dynamically according to the communication interactions and estimation errors between adjacent agents. The positive constant gain d 1 is incorporated into the observer dynamics (11), ensuring the observer’s state converges to the leader’s state in finite time. The neighborhood estimation error is denoted by ε ^ i .
ε̂_i(t) = Σ_{j∈N_i} w_ij(x̂_i(t) − x̂_j(t)) + g_i(x̂_i(t) − x_0(t)) = Σ_{j∈N_i} w_ij[(x̂_i(t) − x_0(t)) − (x̂_j(t) − x_0(t))] + g_i(x̂_i(t) − x_0(t)).
In our problem, the weights w i j and g i are selected as outlined below.
w_ij = α_ij if agent i is connected to agent j, and w_ij = 0 otherwise;
g_i = β_i if agent i is connected to the leader, and g_i = 0 otherwise,
where α i j > 0 (for i , j = 1 , 2 , , N ) and β i > 0 (for i = 1 , 2 , , N ) represent the connection weight constant between agent i and agent j, as well as the leader, respectively. Here, we formulated the adaptive parameter design approach as follows:
ṁ_i(t) = (ξ_i/2)[Σ_{j∈N_i} w_ij(ŷ_i(t) − ŷ_j(t)) + g_i(ŷ_i(t) − y_0(t))]^T × [Σ_{j∈N_i} w_ij(ŷ_i(t) − ŷ_j(t)) + g_i(ŷ_i(t) − y_0(t))],
θ̇_i(t) = −τ_iΓ_0B^TP_1[Σ_{j∈N_i} w_ij(x̂_i(t) − x̂_j(t)) + g_i(x̂_i(t) − x_0(t))].
Let e ^ i be defined as x ^ i x 0 . Utilizing Equations (2) and (11), we can derive the observer error e ^ i ( i = 1 , 2 , , N ), which is defined as follows:
ê̇_i(t) = x̂̇_i(t) − ẋ_0(t) = Aê_i(t) + BΓ_0^T(θ_i(t) − θ_0) − m_i(t)GCε̂_i(t) − d_1 sign[P_1ε̂_i(t)].
Denote the stacked vector ê = [ê_1^T, ê_2^T, …, ê_N^T]^T, the diagonal matrix M(t) = diag{m_1(t), m_2(t), …, m_N(t)}, and θ = [θ_1^T, θ_2^T, …, θ_N^T]^T. Then the following observer error dynamics, governed by the parameter adaptive laws (16) and (17), can be derived.
ê̇ = [I_N ⊗ A − (M(t)H) ⊗ (GC)]ê + [I_N ⊗ (BΓ_0^T)](θ − 1_N ⊗ θ_0) − d_1(I_N ⊗ I) sign[(H ⊗ P_1)ê],
Herein, we present our primary findings as outlined below.
Theorem 1.
Consider the MAS defined by Equations (1) and (2), with an undirected and connected interaction topology G ¯ . Suppose Assumption 1 (stabilizability of ( A , B ) , detectability of ( A , C ) , and  rank ( B ) = m ) holds, and the gain matrix G satisfies
G = (1/2)P_1^{−1}C^T,
where P 1 > 0 is a symmetric positive definite matrix satisfying the LMI in Equation (12). Then, the estimated state x ^ i of each follower agent i converges to the leader’s state x 0 in finite time.
Proof. 
Taking into account the subsequent Lyapunov function
V = V 1 + V 2 + V 3 .
The expression for V_1 is given by V_1 = ê^T(H ⊗ P_1)ê. Additionally, V_2 is defined as V_2 = Σ_{i=1}^N (m_i(t) − m_0)²/ξ_i, where m_0 represents a positive constant that will be specified later. Furthermore, V_3 is expressed as V_3 = Σ_{i=1}^N (θ_i − θ_0)^T(θ_i − θ_0)/τ_i.
By utilizing Equations (16), (17) and (19), the derivative of (21) can be derived as follows:
V̇ = ê^T[H ⊗ (P_1A + A^TP_1)]ê − 2ê^T[(HM(t)H) ⊗ (P_1GC)]ê + 2ê^T[H ⊗ (P_1BΓ_0^T)](θ − 1_N ⊗ θ_0) − 2d_1ê^T[H ⊗ P_1] sign[(H ⊗ P_1)ê] + ê^T[(H(M(t) − m_0I)H) ⊗ (C^TC)]ê − 2(θ − 1_N ⊗ θ_0)^T[H ⊗ (Γ_0B^TP_1)]ê.
By substituting G = 1 2 P 1 1 C T into (22), we can derive the desired result.
V̇ = ê^T[H ⊗ (P_1A + A^TP_1)]ê − m_0ê^T[H² ⊗ (C^TC)]ê − 2d_1ê^T[H ⊗ P_1] sign[(H ⊗ P_1)ê].
Given that H is a real symmetric matrix, there necessarily exists an orthogonal matrix U satisfying UHU^T = Λ = diag{λ_1, λ_2, …, λ_N}, where λ_i represents the i-th eigenvalue of H. Additionally, defining the variable ē as (U ⊗ I_n)ê, Equation (23) can be reformulated as follows:
V̇ = ē^T[Λ ⊗ (P_1A + A^TP_1)]ē − m_0ē^T[Λ² ⊗ (C^TC)]ē − 2d_1ê^T[H ⊗ P_1] sign[(H ⊗ P_1)ê] = Σ_{i=1}^N ē_i^T λ_i(P_1A + A^TP_1 − m_0λ_iC^TC)ē_i − 2d_1ê^T[H ⊗ P_1] sign[(H ⊗ P_1)ê].
For a sufficiently large value of m_0 such that m_0λ_min(H) ≥ 1, it is straightforward to deduce that P_1A + A^TP_1 − m_0λ_iC^TC is strictly less than −I.
On the other hand,
−2d_1ê^T(H ⊗ P_1) sign[(H ⊗ P_1)ê] = −2d_1‖(H ⊗ P_1)ê‖_1 ≤ −2d_1‖(H ⊗ P_1)ê‖_2 ≤ −2d_1√(λ_min(H)λ_min(P_1)) V_1^{1/2}.
Then, we can get
V̇ < −(1/λ_max(P_1))V_1 − 2d_1√(λ_min(H)λ_min(P_1)) V_1^{1/2}.
Hence, it follows that V ≥ 0 and V̇ ≤ 0, ensuring that the error ê, the difference m_i − m_0, and the deviation θ_i − θ_0 all remain within bounded limits. Proceeding further, we obtain the following:
V ˙ 1 = V ˙ V ˙ 2 V ˙ 3 1 λ m a x ( P 1 ) V 1 2 d 1 λ m i n ( H ) λ m i n ( P 1 ) V 1 1 2 i = 1 N ( m i ( t ) m 0 ) e ^ i T ( C H ) T ( C H ) e ^ i + i = 1 N ( θ i ( t ) θ 0 ) Γ 0 B T P 1 ε ^ i .
Given that ê is bounded, we can select a positive value m_0 such that m̃ = max{m_0 − m_i ∣ i = 1, 2, …, N} ≥ 0 and θ̃ = max{|θ_0 − θ_i| ∣ i = 1, 2, …, N} ≥ 0.
V ˙ 1 1 λ m a x ( P 1 ) V 1 2 d 1 λ m i n ( H ) λ m i n ( P 1 ) V 1 1 2 + m ˜ λ m a x ( H 2 ) λ m a x ( C T C ) λ m i n ( H ) λ m i n ( P ) V 1 + ( θ 1 θ 0 ) Γ 0 B T λ m a x ( H ) λ m a x ( P 1 ) λ m i n ( H ) V 1 1 2 1 λ m a x ( P 1 ) V 1 d 1 λ m i n ( H ) λ m i n ( P 1 ) V 1 1 2 + ( d 1 λ m i n ( H ) λ m i n ( P 1 ) + m ˜ h 1 V 1 1 2 + h 2 ) V 1 1 2 ,
where h 1 = λ m a x ( H 2 ) λ m a x ( C T C ) λ m i n ( H ) λ m i n ( P 1 ) , and  h 2 = θ ˜ Γ 0 B T λ m a x ( H ) λ m a x ( P 1 ) λ m i n ( H ) .
Given that V is bounded, it follows that V 1 is also bounded. Consequently, it is possible to choose a sufficiently large d 1 > 0 such that the subsequent inequality holds, where the specific range of d 1 will be explicitly given in the following context:
−d_1√(λ_min(H)λ_min(P_1)) + m̃h_1V_1^{1/2} + h_2 ≤ 0,
which yields
V̇_1 ≤ −(1/λ_max(P_1))V_1 − d_1√(λ_min(H)λ_min(P_1)) V_1^{1/2}.
It is worth noting that V_1 resides within the interval (0, ê^T(0)(H ⊗ P_1)ê(0)], thereby enabling us to choose a suitably large d_1 that satisfies the given condition.
d_1 > (m̃h_1[ê^T(0)(H ⊗ P_1)ê(0)]^{1/2} + h_2) / √(λ_min(H)λ_min(P_1)).
Therefore, from Lemma 5, there exists
T_1 = 2λ_max(P_1) ln( (V_1^{1/2}(ê(0))/λ_max(P_1) + d_1√(λ_min(H)λ_min(P_1))) / (d_1√(λ_min(H)λ_min(P_1))) ),
such that the observer error ê(t) = 0 for all t > T_1.    □
To translate Theorem 1 (Adaptive Distributed Finite-Time Observer Theorem) into actionable steps, the following algorithm (Algorithm 1) details how to compute each follower’s state estimate x ^ i and ensure its finite-time convergence to the leader’s state x 0 , starting from matrix initialization to convergence verification.
Algorithm 1 Adaptive Distributed Finite-Time Observer (for Theorem 1)
  • Objective: Obtain the converged state estimate x ^ i of each follower agent and convergence time T 1 .
    Input: Agent output y i , leader output y 0 , adjacency matrix W, connection weight g i , initial m i ( 0 ) , ξ i , τ i .
    i. Solve the LMI A^TP_1 + P_1A − C^TC + I < 0 (P_1 is computed in MATLAB R2018b via self-written scripts) to obtain the positive definite matrix P_1, then calculate the observer gain G = (1/2)P_1^{−1}C^T.
    ii. Set the control gain d_1 to satisfy d_1 > (m̃h_1V_1^{1/2}(0) + h_2)/√(λ_min(H)λ_min(P_1)), where H = L + G_d (the Laplacian extended by the leader-connection matrix G_d = diag(g_1, …, g_N)) and V_1 = ê^T(H ⊗ P_1)ê.
    iii. For each agent i, compute the neighbor estimation error ε̂_i = Σ_{j∈N_i} w_ij(x̂_i − x̂_j) + g_i(x̂_i − x_0).
    iv. Update the adaptive parameters: ṁ_i(t) = (ξ_i/2)‖Σ_{j∈N_i} w_ij(ŷ_i − ŷ_j) + g_i(ŷ_i − y_0)‖² and θ̇_i(t) = −τ_iΓ_0B^TP_1ε̂_i.
    v. Integrate the observer dynamics x̂̇_i = Ax̂_i + BΓ_0^Tθ_i(t) − m_i(t)GCε̂_i − d_1 sign(P_1ε̂_i) to obtain x̂_i(t).
    vi. Check whether ‖ê_i(t)‖ = ‖x̂_i − x_0‖ < δ (convergence precision); if yes, record the convergence time T_1 and stop.
    Output: Converged x ^ i , convergence time T 1 .
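The steps of Algorithm 1 can be sketched for a scalar toy case with a single follower tied directly to the leader. All numerical values below are hypothetical, the scalar LMI is solved by inspection rather than in MATLAB, and the minus sign in the parameter update follows the usual gradient form (an assumption of this sketch):

```python
# Scalar toy run of Algorithm 1 (one follower directly tied to the
# leader; all numbers hypothetical, chosen so the LMI (12) is easy).
# System: A = -1, B = C = Gamma0 = 1, leader input u0 = theta0.
A, B, C, Gamma0, theta0 = -1.0, 1.0, 1.0, 1.0, 1.0

# Step i: scalar LMI 2*A*P1 - C^2 + 1 < 0 holds for P1 = 1; G = P1^{-1}*C/2
P1 = 1.0
G = 0.5 * C / P1

# Steps ii-v: gains and Euler integration (d1, xi, tau are hypothetical)
d1, xi, tau, dt = 0.5, 1.0, 1.0, 1e-3
x0, xh, m, th = 0.0, 2.0, 1.0, 0.0   # leader state, estimate, m_i, theta_i

for _ in range(20000):                # 20 s of simulated time
    eps = xh - x0                     # neighbor error (g_i = 1, no peers)
    sgn = (eps > 0) - (eps < 0)
    # adaptive laws (16)-(17); the minus sign in the theta update is
    # the usual gradient form and is an assumption of this sketch
    m += dt * (xi / 2.0) * (C * eps) ** 2
    th += dt * (-tau * Gamma0 * B * P1 * eps)
    # observer (11) and leader dynamics (2)
    xh += dt * (A * xh + B * Gamma0 * th - m * G * C * eps - d1 * sgn)
    x0 += dt * (A * x0 + B * Gamma0 * theta0)

print(abs(xh - x0))   # estimation error is driven near zero
```

The residual error is of the order of the sliding-mode chattering, roughly d_1·dt for this explicit Euler discretization.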
Remark 7.
The proposed approach in Theorem 1 offers two significant advantages. First, the gain matrix G is independent of the interaction topology, providing substantial design flexibility. Second, the fully distributed adaptive protocol (11) ensures both scalability in large networks and robustness against topological changes. Notably, while G remains topology-independent (ensuring design flexibility), the interaction topology itself may influence consensus convergence properties. To guarantee consensus convergence, the interaction topology must satisfy Assumption 2.
Remark 8.
Practical Significance: The explicit expression of T 1 provides a quantitative basis for engineering applications. For example, in UAV swarm formation control, T 1 can be used to pre-design the time required for each UAV to “lock” the leader’s state, ensuring the timeliness of formation adjustments.

3.2. Special Case: The Reference Input Is Known

We now consider a special scenario in which all follower agents know the reference input u_0. Under this condition, the distributed protocol for each follower agent i is modified as follows:
x̂̇_i(t) = Ax̂_i(t) + Bu_0 − m_i(t)GCε̂_i(t) − d_1 sign(P_1ε̂_i).
The observer error for each agent i is characterized by the error variable e ^ i , defined as follows:
ê̇_i(t) = x̂̇_i(t) − ẋ_0(t) = Aê_i(t) − m_i(t)GCε̂_i − d_1 sign(P_1ε̂_i).
Applying the same variable substitution as introduced in Section 3.1, we derive the following observer error dynamics:
ê̇ = (I_N ⊗ A)ê − M(t)(H ⊗ GC)ê − d_1(I_N ⊗ I) sign[(H ⊗ P_1)ê].
The conclusion follows directly from the parameter adaptive law (16). The proof is omitted, as it mirrors that of Theorem 1.
Corollary 1.
Consider the MAS described by Equations (1) and (2), with an undirected and connected interaction topology G ¯ . Under Assumption 1 and with gain matrices G satisfying (20), the estimated leader state x ^ i converges to x 0 in a finite time.

3.3. Finite-Time Tracking Consensus

3.3.1. Finite-Time Boundedness Analysis

This section investigates the finite-time tracking consensus problem, with particular emphasis on the case t > T_1. The tracking error e_i = x_i − x_0 for each agent i (i = 1, 2, …, N) evolves according to
ė_i(t) = ẋ_i(t) − ẋ_0(t) = Ax_i(t) + Bu_i(t) − Ax_0(t) − Bu_0 + Fω_i(t) = Ae_i(t) + Bu_i(t) − Bu_0(t) + Fω_i(t).
Clearly, the leader-following multi-agent system achieves finite-time consensus if e_i(t) = 0 for t ≥ T_1 + T_2. Therefore, the finite-time consensus problem under consideration can be reduced to analyzing the finite-time stability (FTS) of the error dynamics system (34).
Inspired by the foundational contributions of [31] in sliding mode control, we propose a novel integral-based switching surface design:
s_i(t) = Le_i(t) − ∫_{T_1}^{t} L(A + BK)e_i(τ)dτ.
The matrices L ∈ R^{m×n} and K ∈ R^{m×n} remain to be designed. Notably, the selection of L must render the product LB nonsingular. Choosing L = B^TX with X > 0 guarantees the nonsingularity of LB, consistent with the results in [31].
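A small numerical check of this design choice (with hypothetical B and X): if B has full column rank and X > 0, then LB = B^TXB is positive definite and hence nonsingular:

```python
# Small check of the design choice L = B^T X (X > 0 symmetric):
# then L*B = B^T X B is positive definite whenever rank(B) = m,
# hence nonsingular. Toy numbers, n = 2, m = 1.
B = [[1.0], [2.0]]                   # 2x1, full column rank
X = [[2.0, 0.5], [0.5, 1.0]]         # symmetric positive definite

# L = B^T X  (1x2), then LB = B^T X B (1x1, a scalar here)
Lrow = [sum(B[k][0] * X[k][j] for k in range(2)) for j in range(2)]
LB = sum(Lrow[j] * B[j][0] for j in range(2))

print(LB)   # positive, so L*B is invertible
```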
When the closed-loop system reaches the sliding surface ($s_i(t) = c$ and $\dot{s}_i = 0$), the equivalent control law $u_{eq_i}$ can be derived:
$$u_{eq_i} = Ke_i + u_0 - (LB)^{-1}LF\omega_i.$$
From (17), we obtain the control expression $\hat{u}_i(t) = \Gamma_0^T\theta_i(T_1)$ for all $t \geq T_1$. Since the reference control input $u_0$ is unavailable to agent i, we formulate the distributed control law as follows:
$$u_i = Ke_i + \Gamma_0^T\theta_i(T_1) - (LB)^{-1}LF\omega_i.$$
Substituting the control law (37) into (34), we derive the sliding mode dynamics:
$$\dot{e}_i(t) = (A+BK)e_i(t) + B\Gamma_0^T\big(\theta_i(T_1) - \theta_0\big) + \big[I - B(LB)^{-1}L\big]F\omega_i(t), \qquad \dot{\omega}_i(t) = \Phi\omega_i(t).$$
The following theorem guarantees that the sliding mode dynamics (38) associated with the predefined switching surface function (35) is FTB.
Theorem 2.
Consider the MAS with an undirected and connected interaction topology $\bar{\mathcal{G}}$, and suppose Assumption 1 holds. The sliding mode dynamics (38) associated with the predefined switching surface $s_i(t)$ in (35) are FTB with respect to $(c_1, c_2, T, R, \delta)$ if there exist positive scalars $\mu_1, \mu_2$ and symmetric positive definite matrices $P_2, P_3$ such that the following LMIs are satisfied:
$$\begin{bmatrix} \Omega_1 & \big[I - B(LB)^{-1}L\big]FP_3^{-1} \\ * & \Omega_2 \end{bmatrix} < 0,$$
$$\begin{bmatrix} \mu_1 R^{-1} - P_2^{-1} & 0 & 0 \\ 0 & P_2^{-1} - 2R^{-1} & 0 \\ 0 & 0 & \mu_2 I - P_3^{-1} \end{bmatrix} < 0,$$
$$\begin{bmatrix} -\frac{e^{-\alpha(T-T_1)}}{2}c_2 & \sqrt{c_1} & \sqrt{\delta} \\ * & -\mu_1 & 0 \\ * & * & -\mu_2 \end{bmatrix} < 0,$$
where $\Omega_1 = P_2^{-1}A^T + AP_2^{-1} - BB^T - \alpha P_2^{-1}$, $\Omega_2 = P_3^{-1}\Phi^T + \Phi P_3^{-1} - \alpha P_3^{-1}$, and the gain matrix K must fulfill
$$K = -B^TP_2.$$
Proof. 
Assume that the following inequalities hold:
$$e^T(T_1)(I_N \otimes R)e(T_1) \leq c_1, \qquad \omega^T(T_1)(I_N \otimes I)\omega(T_1) \leq \delta.$$
Our goal is to derive the conditions ensuring
$$e^T(t)(I_N \otimes R)e(t) < c_2, \qquad \forall t \in [T_1, T].$$
To this end, we introduce the following Lyapunov function candidate:
$$V(e, \omega) = e^T(t)(I_N \otimes P_2)e(t) + \omega^T(t)(I_N \otimes P_3)\omega(t).$$
Subsequently, the time derivative of $V(e, \omega)$ along the trajectories of the sliding mode dynamics (38) is given by
$$\begin{aligned}
\dot{V}(e,\omega) &= e^T(I_N \otimes P_2)\dot{e} + \dot{e}^T(I_N \otimes P_2)e + \omega^T(I_N \otimes P_3)\dot{\omega} + \dot{\omega}^T(I_N \otimes P_3)\omega \\
&= e^T\big\{I_N \otimes [(A+BK)^TP_2 + P_2(A+BK)]\big\}e + \omega^T\big\{I_N \otimes F^T[I - B(LB)^{-1}L]^TP_2\big\}e \\
&\quad + e^T\big\{I_N \otimes P_2[I - B(LB)^{-1}L]F\big\}\omega + 2e^T(I_N \otimes P_2B\Gamma_0^T)\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big) + \omega^T\big[I_N \otimes (\Phi^TP_3 + P_3\Phi)\big]\omega \\
&= e^T\big[I_N \otimes (A^TP_2 + P_2A - 2P_2BB^TP_2)\big]e + \omega^T\big\{I_N \otimes F^T[I - B(LB)^{-1}L]^TP_2\big\}e \\
&\quad + e^T\big\{I_N \otimes P_2[I - B(LB)^{-1}L]F\big\}\omega + 2e^T(I_N \otimes P_2B\Gamma_0^T)\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big) + \omega^T\big[I_N \otimes (\Phi^TP_3 + P_3\Phi)\big]\omega.
\end{aligned}$$
Next, we introduce the auxiliary function:
$$J = \dot{V}(e,\omega) - \alpha V(e,\omega) = J_1 + 2e^T(I_N \otimes P_2B\Gamma_0^T)\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big).$$
By applying (43) and (44), we obtain
$$J_1 = \begin{bmatrix} e \\ \omega \end{bmatrix}^T \begin{bmatrix} \Omega_{11} & I_N \otimes \big\{P_2[I - B(LB)^{-1}L]F\big\} \\ * & \Omega_{22} \end{bmatrix} \begin{bmatrix} e \\ \omega \end{bmatrix},$$
where $\Omega_{11} = I_N \otimes (A^TP_2 + P_2A - 2P_2BB^TP_2 - \alpha P_2)$ and $\Omega_{22} = I_N \otimes (\Phi^TP_3 + P_3\Phi - \alpha P_3)$.
Based on Lemma 4, we obtain the following conclusion:
$$2e^T(I_N \otimes P_2B\Gamma_0^T)\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big) \leq e^T\big[I_N \otimes (P_2BB^TP_2)\big]e + \mu,$$
where $\mu = \big\|\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big\|^2\,\big\|\Gamma_0^T\Gamma_0\big\|$.
Using the preceding results, we may reformulate (45) as:
$$J = \dot{V}(e,\omega) - \alpha V(e,\omega) \leq J_2 + \mu,$$
where
$$J_2 = \begin{bmatrix} e \\ \omega \end{bmatrix}^T \begin{bmatrix} \bar{\Omega}_{11} & I_N \otimes \big\{P_2[I - B(LB)^{-1}L]F\big\} \\ * & \Omega_{22} \end{bmatrix} \begin{bmatrix} e \\ \omega \end{bmatrix},$$
where $\bar{\Omega}_{11} = I_N \otimes (A^TP_2 + P_2A - P_2BB^TP_2 - \alpha P_2)$.
By pre- and post-multiplying the block matrix defining $J_2$ with the diagonal matrix $\mathrm{diag}(I_N \otimes P_2^{-1}, I_N \otimes P_3^{-1})$ and utilizing condition (39), we can deduce that $J_2 < 0$.
Combining Lemma 6 with the differential inequality V ˙ ( e , ω ) < α V ( e , ω ) + μ , we derive
$$V(t) \leq e^{\alpha(t - T_1)}\big[V(T_1) + \mu(t - T_1)\big],$$
for all $t \in [T_1, T]$.
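The exponential bound in (50) can be sanity-checked by integrating the worst case $\dot{V} = \alpha V + \mu$ with a simple Euler scheme (illustrative scalars, not the parameters of Section 4):

```python
import numpy as np

alpha, mu = 0.8, 0.3           # illustrative positive scalars
T1, T = 0.0, 2.0
V = 1.5                        # illustrative initial value V(T1)
V0, dt = V, 1e-4

# Euler integration of the worst case V' = alpha*V + mu on [T1, T]
for _ in range(int((T - T1) / dt)):
    V += dt * (alpha * V + mu)

# Bound of the comparison lemma: V(T) <= exp(alpha*(T-T1)) * (V(T1) + mu*(T-T1))
bound = np.exp(alpha * (T - T1)) * (V0 + mu * (T - T1))
assert V <= bound
print(V, bound)
```

The check also illustrates why the bound is conservative: the exact solution is $e^{\alpha\Delta}V(T_1) + \frac{\mu}{\alpha}(e^{\alpha\Delta}-1)$, and $\frac{\mu}{\alpha}(e^{\alpha\Delta}-1) \leq \mu\Delta e^{\alpha\Delta}$ for $\alpha, \Delta > 0$.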
By applying the inequality $e^T(I_N \otimes P_2)e \leq e^T(I_N \otimes P_2)e + \omega^T(I_N \otimes P_3)\omega$ in conjunction with (50), we derive
$$e^T(I_N \otimes P_2)e < e^{\alpha(T-T_1)}\big[e^T(T_1)(I_N \otimes P_2)e(T_1) + \omega^T(T_1)(I_N \otimes P_3)\omega(T_1) + \mu(T - T_1)\big].$$
Applying the Rayleigh inequality, we obtain
$$\lambda_{min}\big(R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}}\big)e^T(I_N \otimes R)e < e^{\alpha(T-T_1)}\big[\lambda_{max}\big(R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}}\big)e^T(T_1)(I_N \otimes R)e(T_1) + \lambda_{max}(P_3)\omega^T(T_1)(I_N \otimes I)\omega(T_1) + \mu(T-T_1)\big].$$
Then, we proceed by establishing the following inequality:
$$e^T(I_N \otimes R)e < \frac{e^{\alpha(T-T_1)}\big[\lambda_{max}\big(R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}}\big)e^T(T_1)(I_N \otimes R)e(T_1) + \lambda_{max}(P_3)\omega^T(T_1)(I_N \otimes I)\omega(T_1) + \mu(T-T_1)\big]}{\lambda_{min}\big(R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}}\big)} < c_2.$$
Taking (40) into account, we can deduce that
$$\frac{1}{2}I < R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}} < \frac{1}{\mu_1}I, \qquad 0 < P_3 < \frac{1}{\mu_2}I.$$
Thus, we have
$$\lambda_{min}\big(R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}}\big) > \frac{1}{2}, \qquad \lambda_{max}\big(R^{-\frac{1}{2}}P_2R^{-\frac{1}{2}}\big) < \frac{1}{\mu_1}, \qquad \lambda_{max}(P_3) < \frac{1}{\mu_2}.$$
In view of the established results, we formulate the following sufficient condition guaranteeing (53):
$$\frac{c_1}{\mu_1} + \frac{\delta}{\mu_2} + \mu(T - T_1) < \frac{e^{-\alpha(T-T_1)}}{2}c_2.$$
Obviously,
$$\frac{c_1}{\mu_1} + \frac{\delta}{\mu_2} < \frac{e^{-\alpha(T-T_1)}}{2}c_2.$$
Applying the Schur complement lemma to (56) yields the LMI (41).    □
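The Schur complement step can be illustrated numerically: the scalar condition $\frac{c_1}{\mu_1} + \frac{\delta}{\mu_2} < \frac{e^{-\alpha(T-T_1)}}{2}c_2$ holds exactly when the corresponding 3x3 Schur-complement matrix is negative definite. A sketch with hypothetical parameter sets:

```python
import numpy as np

def lmi_41(c1, c2, delta, mu1, mu2, alpha, T, T1):
    """Schur-complement matrix form of the scalar condition (56)."""
    a = -np.exp(-alpha * (T - T1)) * c2 / 2.0
    M = np.array([[a,               np.sqrt(c1), np.sqrt(delta)],
                  [np.sqrt(c1),    -mu1,         0.0],
                  [np.sqrt(delta),  0.0,        -mu2]])
    return bool(np.all(np.linalg.eigvalsh(M) < 0))   # negative definite?

def scalar_56(c1, c2, delta, mu1, mu2, alpha, T, T1):
    return bool(c1 / mu1 + delta / mu2 < np.exp(-alpha * (T - T1)) * c2 / 2.0)

# Hypothetical parameter sets: one feasible, one infeasible
feasible   = dict(c1=0.1, c2=50.0, delta=0.1, mu1=1.0, mu2=1.0, alpha=0.5, T=2.0, T1=0.0)
infeasible = dict(c1=5.0, c2=1.0,  delta=5.0, mu1=0.5, mu2=0.5, alpha=2.0, T=2.0, T1=0.0)

assert lmi_41(**feasible) and scalar_56(**feasible)
assert (not lmi_41(**infeasible)) and (not scalar_56(**infeasible))
```

Taking the Schur complement with respect to the lower-right block $\mathrm{diag}(-\mu_1, -\mu_2)$ recovers the scalar inequality, which is why the two tests agree.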
Remark 9.
FTB means that if the initial state satisfies $x^T(0)Rx(0) \leq c_1$ and the initial disturbance satisfies $\omega^T(0)\omega(0) \leq \delta$, then the state $x(t)$ will remain within the bound $x^T(t)Rx(t) < c_2$ for all $t \in [0, T]$. This is different from FTC (which requires $x(t) \to 0$ in finite time) and is more suitable for systems with persistent external disturbances.
To obtain the matrices P 2 , P 3 and the control gain K, the following algorithm (Algorithm 2 for FTB verification of sliding mode dynamics) is proposed:
Algorithm 2 FTB Verification for Sliding Mode Dynamics (for Theorem 2)
  • Objective: Confirm the sliding mode dynamics satisfy FTB with respect to ( c 1 , c 2 , T , R , δ ).
    Input: Converged x ^ i (from Algorithm 1), system matrices A , B , F , disturbance ω i ( t ) , LMI parameters μ 1 , μ 2 .
    i. Define the tracking error $e_i = x_i - x_0$ and its dynamics $\dot{e}_i = Ae_i + Bu_i - Bu_0 + F\omega_i$; design the integral switching surface $s_i(t) = Le_i(t) - \int_{T_1}^{t}L(A+BK)e_i(\tau)\,d\tau$ (set $L = B^TX$, $X > 0$, to ensure the nonsingularity of LB).
    ii. Solve the three LMIs (39)–(41) of Theorem 2 for $P_2$, $P_3$, $\mu_1$, $\mu_2$, and compute the gain $K = -B^TP_2$.
    iii. Derive the equivalent control $u_{eq_i} = Ke_i + u_0 - (LB)^{-1}LF\omega_i$ on the sliding surface ($s_i = c$, $\dot{s}_i = 0$).
    iv. Use the Lyapunov function $V(e,\omega) = e^T(I_N \otimes P_2)e + \omega^T(I_N \otimes P_3)\omega$ to verify $\dot{V} < \alpha V + \mu$, ensuring $e^T(I_N \otimes R)e < c_2$ for $t \in [T_1, T]$.
    Output: FTB confirmation, matrices P 2 , P 3 , control gain K.
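With the gain choice $K = -B^TP_2$ of Theorem 2, the closed-loop term in the proof collapses algebraically: $(A+BK)^TP_2 + P_2(A+BK) = A^TP_2 + P_2A - 2P_2BB^TP_2$. This identity can be confirmed with random matrices (illustrative data, not the example of Section 4):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q = rng.standard_normal((n, n))
P2 = Q @ Q.T + n * np.eye(n)       # symmetric positive definite P2

K = -B.T @ P2                       # gain condition of Theorem 2
Acl = A + B @ K                     # closed-loop state matrix

# Identity used in the proof: both sides must agree exactly
lhs = Acl.T @ P2 + P2 @ Acl
rhs = A.T @ P2 + P2 @ A - 2 * P2 @ B @ B.T @ P2
assert np.allclose(lhs, rhs)
```

The identity follows from $K^TB^TP_2 = -P_2BB^TP_2$ and its transpose, which is precisely the cancellation exploited in the derivative computation of $V(e, \omega)$.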

3.3.2. Finite-Time Tracking Protocol Design

A sliding mode controller is systematically designed to: (i) drive the system trajectories of (38) to the prescribed switching manifold $s(t) = s(x(t)) = c$ in finite time $T_2$ and (ii) maintain the sliding motion on the surface thereafter.
Given that $\hat{x}_i = x_0$ for all $t > T_1$, the distributed tracking protocol is modified as follows:
$$u_i(t) = \hat{u}_i + K\big(x_i(t) - \hat{x}_i(t)\big) - d_2(t)\,\mathrm{sign}(s_i(t)) = \Gamma_0^T\theta_i(T_1) + K\big(x_i(t) - x_0(t)\big) - d_2(t)\,\mathrm{sign}(s_i(t)),$$
where $d_2(t)$ represents a time-dependent gain of the form
$$d_2(t) = \eta + \big\|(B^TP_2^{-1}B)^{-1}B^TP_2^{-1}F\big\|\,\|\omega(t)\| + \big\|\Gamma_0^T\tilde{\theta}\big\|,$$
with a small positive constant $\eta$.
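The role of $d_2(t)$ can be illustrated with a scalar reaching-law simulation: as long as the switching gain exceeds the matched disturbance bound by $\eta$, the sliding variable reaches a neighborhood of zero within $|s(0)|/\eta$. A minimal sketch with hypothetical values (scalar dynamics, $LB = 1$):

```python
import numpy as np

eta, dt, Tsim = 0.5, 1e-3, 5.0
s = 2.0                                   # illustrative initial sliding variable
hit = None
for k in range(int(Tsim / dt)):
    t = k * dt
    dist = 0.8 * np.sin(3 * t)            # matched disturbance, |dist| <= 0.8
    d2 = eta + 0.8                        # gain = eta + disturbance bound
    s += dt * (-d2 * np.sign(s) + dist)   # reaching law: s' = -d2 sign(s) + dist
    if hit is None and abs(s) < 1e-2:
        hit = t

# Reaching time is bounded by |s(0)| / eta since |s|' <= -eta off the surface
assert hit is not None and hit <= 2.0 / eta
print("reaching time ~", hit)
```

Off the surface, $\frac{d}{dt}|s| \leq -d_2 + |{\rm dist}| \leq -\eta$, which is exactly the margin the $\eta$ term in $d_2(t)$ provides.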
We present our principal findings through the following methodological framework:
Theorem 3.
Consider the multi-agent tracking system described by (34), where the interaction topology $\bar{\mathcal{G}}$ is undirected and connected, and suppose Assumption 1 holds. If the matrix L is designed such that LB in (35) is nonsingular, then under the sliding mode control (SMC) strategy prescribed in (57), the tracking error (38) is guaranteed to converge to a bounded region within finite time.
Proof. 
Denote $e = [e_1^T, e_2^T, \ldots, e_N^T]^T$ and $\omega = [\omega_1^T, \omega_2^T, \ldots, \omega_N^T]^T$. The error dynamics can be rewritten as
$$\dot{e} = \big[I_N \otimes (A+BK)\big]e + \big(I_N \otimes B\Gamma_0^T\big)\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big) - d_2(t)(I_N \otimes B)\,\mathrm{sign}(s(t)) + (I_N \otimes F)\omega.$$
Since $\dot{s}_i(t) = L\dot{e}_i(t) - L(A+BK)e_i(t)$ and $s = [s_1^T, s_2^T, \ldots, s_N^T]^T$, we have
$$\dot{s}(t) = (I_N \otimes L)\dot{e}(t) - \big[I_N \otimes L(A+BK)\big]e(t) = \big(I_N \otimes LB\Gamma_0^T\big)\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big) - d_2(t)(I_N \otimes LB)\,\mathrm{sign}(s(t)) + (I_N \otimes LF)\omega.$$
Let L be selected as $L = B^TP_2^{-1}$ with $P_2 > 0$; this ensures that $LB = B^TP_2^{-1}B$ is nonsingular. Then, the following Lyapunov function is considered:
$$\bar{V}(t) = \frac{1}{2}s^T(t)\big[I_N \otimes (B^TP_2^{-1}B)^{-1}\big]s(t).$$
Taking the derivative of $\bar{V}(t)$, we obtain
$$\dot{\bar{V}}(t) = s^T(t)\big[I_N \otimes (B^TP_2^{-1}B)^{-1}\big]\big[-d_2(t)(I_N \otimes LB)\,\mathrm{sign}(s(t))\big] + s^T(t)\big[I_N \otimes (B^TP_2^{-1}B)^{-1}B^TP_2^{-1}F\big]\omega(t) + s^T\big[I_N \otimes (B^TP_2^{-1}B)^{-1}B^TP_2^{-1}B\Gamma_0^T\big]\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big).$$
Substituting (58), we get
$$\begin{aligned}
\dot{\bar{V}} &= s^T\big[I_N \otimes (B^TP_2^{-1}B)^{-1}\big]\Big[-\Big(\eta + \big\|(B^TP_2^{-1}B)^{-1}B^TP_2^{-1}F\big\|\,\|\omega(t)\| + \big\|\Gamma_0^T\tilde{\theta}\big\|\Big)\big(I_N \otimes B^TP_2^{-1}B\big)\,\mathrm{sign}(s)\Big] \\
&\quad + s^T\big[I_N \otimes (B^TP_2^{-1}B)^{-1}B^TP_2^{-1}F\big]\omega + s^T\big[I_N \otimes (B^TP_2^{-1}B)^{-1}B^TP_2^{-1}B\Gamma_0^T\big]\big(\theta(T_1) - \mathbf{1}_N \otimes \theta_0\big) \\
&\leq -\eta\|s\|_1 \leq -\eta\|s\| \leq -\eta\sqrt{\frac{2}{\lambda_{max}\big((B^TP_2^{-1}B)^{-1}\big)}}\,\bar{V}^{\frac{1}{2}} < 0.
\end{aligned}$$
Moreover, the analysis in Section 3.3.1 reveals that there exists a positive scalar $\xi > 0$ such that $\|s(t)\| = \|s(e(t))\| \leq \xi$ holds for all $t \in [T_1, T]$. This conclusion is supported by the boundedness of the error term, as $\|e(t)\| \leq d$ holds for all $t \in [T_1, T]$. Applying the Rayleigh inequality to (61) yields
$$\bar{V} \leq \frac{\lambda_{max}\big[(B^TP_2^{-1}B)^{-1}\big]}{2}\|s(t)\|^2 \leq \tilde{\xi},$$
where $\tilde{\xi} = \frac{\lambda_{max}[(B^TP_2^{-1}B)^{-1}]}{2}\xi^2$.
Under the ideal sliding motion assumption of the FTB-based SMC design, the results in (63) and (64) demonstrate the existence of a constant c and a finite time
$$T^* = \frac{2\|s(T_1)\|\,\lambda_{max}\big((B^TP_2^{-1}B)^{-1}\big)}{\eta}$$
such that $\|s(t)\| = \|s(e(t))\| \leq c$ for all $t \geq T^*$, where $T^*$ quantifies the worst-case convergence time, dependent on the initial sliding-surface deviation and the system dynamics.    □
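The finite-time estimate rests on the standard comparison fact: if $\dot{\bar{V}} \leq -k\bar{V}^{1/2}$ with $k > 0$, then $\bar{V}$ reaches zero no later than $2\bar{V}(T_1)^{1/2}/k$. A numerical sanity check of this bound with illustrative values:

```python
import numpy as np

k, dt = 0.7, 1e-4
V = 3.0                                 # illustrative initial value of V-bar
t_bound = 2.0 * np.sqrt(V) / k          # finite-time upper bound 2*sqrt(V0)/k
t, hit = 0.0, None
while t < 2 * t_bound:
    # Euler step of the worst case V' = -k*sqrt(V), clamped at zero
    V = max(V + dt * (-k * np.sqrt(V)), 0.0)
    t += dt
    if hit is None and V == 0.0:
        hit = t

assert hit is not None and hit <= t_bound + dt
print(hit, t_bound)
```

The exact solution $\bar{V}(t) = \big(\bar{V}(T_1)^{1/2} - kt/2\big)^2$ vanishes exactly at $t = 2\bar{V}(T_1)^{1/2}/k$, so the simulated hitting time lands at or just before the bound.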
To obtain the convergence time T , the following algorithm (Algorithm 3) is proposed:
Algorithm 3 Finite-Time Tracking Control (for Theorem 3)
  • Objective: Drive the tracking error e i to a bounded region in finite time.
    Input: $\hat{x}_i$ (from Algorithm 1), $P_2$ (from Algorithm 2), disturbance bound $\|\omega(t)\|$, $\tilde{\theta} = \theta(T_1) - \mathbf{1}_N \otimes \theta_0$.
    i. Design the distributed sliding mode control law $u_i(t) = \Gamma_0^T\theta_i(T_1) + K(x_i - x_0) - d_2(t)\,\mathrm{sign}(s_i(t))$, where $d_2(t) = \eta + \|(B^TP_2^{-1}B)^{-1}B^TP_2^{-1}F\|\,\|\omega(t)\| + \|\Gamma_0^T\tilde{\theta}\|$ ($\eta > 0$ is a small constant).
    ii. Compute the sliding surface derivative $\dot{s}(t) = (I_N \otimes L)\dot{e}(t) - [I_N \otimes L(A+BK)]e(t)$, substituting $\dot{e}(t)$ from the error dynamics $\dot{e} = [I_N \otimes (A+BK)]e + (I_N \otimes B\Gamma_0^T)(\theta(T_1) - \mathbf{1}_N \otimes \theta_0) - d_2(t)(I_N \otimes B)\,\mathrm{sign}(s(t)) + (I_N \otimes F)\omega$.
    iii. Use the Lyapunov function $\bar{V}(t) = \frac{1}{2}s^T[I_N \otimes (B^TP_2^{-1}B)^{-1}]s$ to prove $\dot{\bar{V}} \leq -\eta\|s\| < 0$, confirming that $s(t)$ converges to the sliding surface.
    iv. Calculate the total convergence time $T^* = \frac{2\|s(T_1)\|\,\lambda_{max}((B^TP_2^{-1}B)^{-1})}{\eta}$.
    Output: Control input $u_i(t)$, total convergence time $T^*$, bounded $e_i(t)$.
Remark 10.
This theorem adopts a “two-stage design”: (a) Before T 1 : The adaptive observer (from Theorem 1) ensures x ^ i x 0 in finite time; (b) After T 1 : The sliding mode controller drives the tracking error e i = x i x 0 to a bounded region. This two-stage design decomposes the complex “finite-time tracking” objective into manageable sub-tasks, improving the rigor of the stability analysis.
Remark 11.
The time-varying gain d 2 ( t ) in the SMC law explicitly includes the bound of the external disturbance ω ( t ) and the parameter estimation error θ ˜ . This “disturbance compensation” design ensures that the sliding mode motion is not destroyed by disturbances, thereby guaranteeing the robustness of the system.
Remark 12.
Network Scale, Global Performance, and Local Agent Behavior: For large-scale MASs, a growing node count N and edge count $|E|$ affect global performance but barely affect local agent behavior. Global impacts: more nodes and edges widen disturbance propagation paths (reducing global stability margins) and lengthen information propagation chains (increasing the global finite-time consensus time), as more state discrepancies need reconciliation. Local invariance: if each agent's neighbor count k stays fixed (e.g., 3–5 nearest neighbors in sparse topologies), its local computational load and finite-time convergence time remain unchanged, since its control depends only on local (not global) state information. This balance makes the strategy suitable for large-scale MASs where local agent responsiveness is key.
Remark 13.
The proposed finite-time sliding mode control strategy provides a scalable solution for MASs, but its practical application and further optimization must address two core tensions—these also point to key directions for future work:
First, the tension between robustness and certainty: While the strategy resists bounded disturbances, real MASs (e.g., UAV swarms in windy environments) face unbounded, time-varying disturbances (e.g., sudden gusts). Future work should integrate adaptive observers to dynamically track disturbance bounds, avoiding conservatism from “worst-case” static assumptions.
Second, the tension between topology stability and mobility: Fixed-topology analysis simplifies theoretical proofs, but mobile MASs (e.g., autonomous vehicle platoons) have dynamically switching edges. Subsequent research needs to link topology switching frequency to sliding mode surface persistence—ensuring global consensus even when individual agent connections are intermittent.
Notably, resolving these tensions does not require abandoning the core sliding mode mechanism—instead, it relies on synergies with adaptive control, graph theory, and hardware optimization to expand the strategy’s practical scope.

4. Numerical Example

This section provides numerical validation of the theoretical results established in Section 3. The agent ensemble consists of N = 4 identical agents, each modeled by the linear system in (1) with the following system matrices:
$$A = \begin{bmatrix} 1 & 0 & 0 & 0.1 \\ 0.1 & 2 & 0 & 0.2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 0.5 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0.1 & 0.25 & 0 \\ 0.1 & 0.1 & 0.1 & 0 \end{bmatrix}, \quad F = \begin{bmatrix} 0.1 \\ 1.1 \\ 0.1 \\ 0.1 \end{bmatrix}.$$
The interconnection topology is assumed to be undirected and connected, as illustrated in Figure 1.
The Laplacian matrix $\mathcal{L}$ of graph $\mathcal{G}$ is defined as follows:
$$\mathcal{L} = \begin{bmatrix} 2 & -1 & -1 & 0 \\ -1 & 2 & 0 & -1 \\ -1 & 0 & 2 & -1 \\ 0 & -1 & -1 & 2 \end{bmatrix}.$$
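The Laplacian above can be checked against the defining properties of a connected undirected graph: symmetry, zero row sums, and a strictly positive second-smallest eigenvalue (the algebraic connectivity). A quick numpy verification:

```python
import numpy as np

# Laplacian of the 4-agent topology in Figure 1 (off-diagonals are -1 per edge)
L = np.array([[ 2, -1, -1,  0],
              [-1,  2,  0, -1],
              [-1,  0,  2, -1],
              [ 0, -1, -1,  2]], dtype=float)

assert np.allclose(L, L.T)                    # undirected graph: L symmetric
assert np.allclose(L @ np.ones(4), 0)         # row sums are zero
eigs = np.sort(np.linalg.eigvalsh(L))
assert abs(eigs[0]) < 1e-10 and eigs[1] > 0   # lambda_2 > 0: graph connected
print("algebraic connectivity:", eigs[1])
```

The positive $\lambda_2$ is what the undirected-and-connected assumption of Theorems 1–3 translates to at the matrix level.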
The leader–follower interconnection topology is encoded by the diagonal matrix
$$D = \mathrm{diag}(1, 1, 1, 1) = I_4.$$
By solving Inequality (12), we obtain the following:
$$P_1 = \begin{bmatrix} 0.7975 & 0.0903 & 0.0957 & 0.0206 \\ 0.2020 & 0.6201 & 0.0280 & 0.1508 \\ 0.0956 & 0.0089 & 1.1655 & 0.0026 \\ 0.0460 & 0.3474 & 0.0026 & 0.3246 \end{bmatrix}.$$
Furthermore, the gain matrix G is designed as follows:
$$G = \begin{bmatrix} 0.6297 & 0.0518 \\ 0.2149 & 0.0746 \\ 0.1565 & 0.0394 \\ 0.3205 & 0.0869 \end{bmatrix}.$$
Consider an external perturbation characterized by
$$\omega_i(t) = \sin(t), \quad i = 1, 2, \ldots, N,$$
and define the basis function matrix as
$$\Gamma_0^T = \begin{bmatrix} \sin(t) & \cos(t) \end{bmatrix}.$$
All parameters $\xi_i$ are uniformly set to $\xi_i = 3$, while the time constants $\tau_i$ are assigned as $\tau_i = 1$. The initial values are chosen randomly. The system achieves consensus with rigorously bounded error:
$$|\hat{x}_{ij} - x_{0j}| < 0.1, \quad \forall t \geq 0.32,$$
where $\hat{x}_{ij}$ denotes the j-th state estimate of follower i, and $x_{0j}$ represents the leader's corresponding state.
Figure 2 demonstrates the finite-time convergence of the observer states $\hat{x}_{ij}$ to the leader's states $x_{0j}$ under the adaptive consensus protocol (11). The trajectories confirm that all follower agents achieve effective tracking within the theoretically predicted time horizon.
By solving Inequality (39), we derive the following matrix $P_2$:
$$P_2 = \begin{bmatrix} 0.5799 & 0.0679 & 0.0399 & 0.1496 \\ 0.0679 & 0.6657 & 0.0674 & 0.2879 \\ 0.0399 & 0.0674 & 0.5800 & 0.1508 \\ 0.1496 & 0.2879 & 0.1508 & 1.1788 \end{bmatrix}.$$
Moreover, the gain matrix K is constructed as follows:
$$K = \begin{bmatrix} 0.2500 & 0.0335 & 0.5600 & 0.0760 \end{bmatrix}.$$
Numerical results demonstrate that the leader–follower state discrepancy $|x_{ij} - x_{0j}|$ stabilizes below the 0.1 threshold after 89.39 s of system operation.
Figure 3 shows the trajectories of $x_{ij}$ and $x_{0j}$ in the MAS, confirming that tracking boundedness is achieved in finite time under the constraints of (58) and (59).

5. Conclusions

This paper addresses the finite-time tracking control problem of linear MASs with external disturbances under undirected communication topologies. An adaptive state observer, leveraging neighboring agents’ output information, is designed for accurate follower state estimation; when integrated with sliding mode control, the method rigorously ensures FTB tracking of the leader, enabling rapid motion convergence critical for formation control and emergency response scenarios. Its practicality is verified in applications like disturbed UAV swarm recovery, warehouse Automated Mobile Robot (AMR) obstacle handling, and multi-Unmanned Surface Vehicle (USV) patrol re-formation, with potential extensions to power regulation and emergency material scheduling. Compared with existing finite-time MAS control works [18,19,20], this study offers three key advantages: first, it extends beyond the Lagrangian nonlinear limitations of [18,19] to adapt to general linear MASs and explicit external disturbances; second, it abandons the full state information or global topology parameter reliance of [18,19,20], realizing fully distributed control via local neighbor data; third, unlike [18,19,20] which only confirm finite-time convergence, it provides quantifiable FTB bounds to ensure transient safety. Future research will focus on distributed control for heterogeneous MASs, exploring finite-time strategies for directed topologies, switching communication structures, and complex unknown disturbances.

Author Contributions

Conceptualization, L.G.; writing—original draft, X.X.; software design, Y.G.; validation, L.G.; system simulation, M.X.; data planning and management, X.W.; project administration, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC), grant number 62173188.

Data Availability Statement

This study primarily focuses on theoretical analysis, and only a minimal amount of data was used. The data employed in the simulations are original system parameters that were not adapted from other sources. All relevant data supporting the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to express our sincere gratitude to the National Natural Science Foundation of China for providing financial support for this research. We also appreciate the valuable suggestions and constructive comments from all members of the research team during the manuscript preparation process.

Conflicts of Interest

The authors declare no conflicts of interest. The sponsors had no role in the design, execution, interpretation, or writing of the study.

References

  1. Derakhshannia, M.; Moosapour, S. Adaptive arbitrary time synchronisation control for fractional order chaotic systems with external disturbances. Int. J. Syst. Sci. 2024, 56, 1540–1560. [Google Scholar] [CrossRef]
  2. Munoz-Pacheco, J.; Volos, C.; Serrano, F.; Jafari, S.; Kengne, J.; Rajagopal, K. Stabilization and synchronization of a complex hidden attractor chaotic system by backstepping technique. Entropy 2021, 23, 921. [Google Scholar] [CrossRef] [PubMed]
  3. Pham, V.; Kingni, S.; Volos, C.; Jafari, S.; Kapitaniak, T. A simple three-dimensional fractional-order chaotic system without equilibrium: Dynamics, circuitry implementation, chaos control and synchronization. AEU Int. J. Electron. Commun. 2017, 78, 220–227. [Google Scholar] [CrossRef]
  4. Li, C.; Zhang, W.; Yang, B.; Yee, H. A multi-player game equilibrium problem based on stochastic variational inequalities. AIMS Math. 2024, 9, 26035–26048. [Google Scholar] [CrossRef]
  5. Liu, D.; Xiong, Z.; Liu, Z.; Li, M.; Zhou, S.; Li, J.; Liu, X.; Zhou, X. Trajectory tracking closed-loop cooperative control of manipulator neural network and terminal sliding model. Symmetry 2025, 17, 1319. [Google Scholar] [CrossRef]
  6. Ren, W.; Beard, R. Consensus seeking in multi-agent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661. [Google Scholar] [CrossRef]
  7. Hong, Y.; Hu, J.; Gao, L. Tracking control for multi-agent consensus with an active leader and variable topology. Automatica 2006, 42, 1177–1182. [Google Scholar] [CrossRef]
  8. Li, Z.; Duan, Z.; Chen, G.; Huang, L. Consensus of multi-agent systems and synchronization of complex networks: A unified viewpoint. IEEE Trans. Autom. Control 2010, 57, 213–224. [Google Scholar]
  9. Li, Z.; Duan, Z.; Lewis, F. Distributed robust consensus control of multi-agent systems with heterogeneous matching uncertainties. Automatica 2014, 50, 883–889. [Google Scholar] [CrossRef]
  10. Ye, X.; Wen, B.; Zhang, H.; Xue, F. Leader-following consensus control of multiple nonholonomic mobile robots: An iterative learning adaptive control scheme. J. Frankl. Inst. 2022, 359, 1018–1040. [Google Scholar] [CrossRef]
  11. Bahrampour, B.; Asemani, M.H.; Dehghani, M.; Tavazoei, M. Consensus control of incommensurate fractional-order multi-agent systems: An LMI approach. J. Frankl. Inst. 2023, 360, 4031–4055. [Google Scholar] [CrossRef]
  12. Gao, L.; Zhu, X.; Chen, W. Leader-following consensus problem with an accelerated motion leader. Int. J. Control. Autom. Syst. 2012, 10, 931–939. [Google Scholar] [CrossRef]
  13. Li, Z.; Li, X.; Lin, P.; Ren, W. Consensus of linear multi-agent systems with reduced-order observer-based protocols. Syst. Control Lett. 2011, 60, 510–516. [Google Scholar] [CrossRef]
  14. Xu, X.; Chen, S.; Huang, W.; Gao, L. Leader-following consensus of discrete-time multi-agent systems with observer-based protocols. Neurocomputing 2013, 118, 334–341. [Google Scholar] [CrossRef]
  15. Xu, X.; Gao, L. Intermittent observer-based consensus control for multi-agent systems with switching topologies. Int. J. Syst. Sci. 2016, 47, 1891–1904. [Google Scholar] [CrossRef]
  16. Saber, R.; Murray, R. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar] [CrossRef]
  17. Xiao, L.; Boyd, S. Fast linear iterations for distributed averaging. IEEE Trans. Autom. Control 2004, 53, 65–78. [Google Scholar] [CrossRef]
  18. Li, X.; Luo, X.; Wang, J.; Guan, X. Finite-time consensus of nonlinear multi-agent system with prescribed performance. Nonlinear Dyn. 2018, 91, 2397–2409. [Google Scholar] [CrossRef]
  19. Sharifi, M.; Yazdanpanah, M. Finite time consensus of nonlinear multi-agent systems in the presence of communication time delays. Eur. J. Control 2020, 53, 10–19. [Google Scholar] [CrossRef]
  20. Li, Z.; Mazouchi, M.; Modares, H.; Wang, X. Finite-time adaptive output synchronization of uncertain nonlinear heterogeneous multi-agent systems. Int. J. Robust Nonlinear Control 2021, 31, 9416–9435. [Google Scholar] [CrossRef]
  21. Ni, W.; Cheng, D. Leader-following consensus of multi-agent systems under fixed and switching topologies. Syst. Control Lett. 2010, 50, 209–217. [Google Scholar] [CrossRef]
  22. ElBsat, M.; Yaz, E. Robust and resilient finite-time control of a class of continuous-time nonlinear systems. IFAC Proc. Vol. 2012, 45, 331–352. [Google Scholar]
  23. ElBsat, M.; Yaz, E. Robust and resilient finite-time bounded control of a class of discrete-time uncertain nonlinear systems. Automatica 2013, 49, 2292–2296. [Google Scholar] [CrossRef]
  24. Song, J.; He, S. Observer-based finite-time passive control for a class of uncertain time-delayed Lipschitz nonlinear systems. Trans. Inst. Meas. Control 2014, 36, 797–804. [Google Scholar] [CrossRef]
  25. Yan, Z.; Zhang, G.; Zhang, W. Finite-time stability and stabilization of linear Itô stochastic systems with state and control dependent noise. Asian J. Control 2013, 15, 270–281. [Google Scholar] [CrossRef]
  26. Amato, F.; Ariola, M.; Dorato, P. Finite-time control of linear systems subject to parametric uncertainties and disturbances. Automatica 2011, 37, 1459–1463. [Google Scholar] [CrossRef]
  27. Bernstein, D. Matrix Mathematics: Theory, Facts, and Formulas; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  28. Jiang, B.; Wang, J.; Soh, Y. An adaptive technique for robust diagnosis of faults with independent effects on system outputs. Int. J. Control 2002, 75, 792–802. [Google Scholar] [CrossRef]
  29. Yu, S.; Yu, X.; Shirinzadeh, B.; Man, Z. Continuous finite-time control for robotic manipulators with terminal sliding mode. Automatica 2005, 41, 1957–1964. [Google Scholar] [CrossRef]
  30. Evans, L.C. Partial Differential Equations, 2nd ed.; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  31. Niu, Y.; Ho, D.; Lam, J. Robust integral sliding mode control for uncertain stochastic systems with time-varying delay. Automatica 2005, 41, 873–880. [Google Scholar] [CrossRef]
Figure 1. Graphical representation of the topological structure of the multi-agent system.
Figure 2. The paths traced by $\hat{x}_{ij}$ and $x_{0j}$, respectively.
Figure 3. The paths traced by $x_{ij}$ and $x_{0j}$, respectively.
Table 1. List of symbols in this paper.
Symbol | Description | Application Scenario
$\mathbb{R}$ / $\mathbb{C}$ | The sets of real and complex numbers, respectively. | Defining the numerical domain of variables or matrix elements.
$I$ | Identity matrix with dimensions compatible with the other matrices in the operation. | Matrix operations (e.g., matrix multiplication, inverse matrix calculation).
$S > 0$ (S a symmetric matrix) | S is positive definite: all eigenvalues of S are positive real numbers. | Determination of matrix properties (e.g., analysis of quadratic-form positive definiteness).
$S < 0$ (S a symmetric matrix) | S is negative definite: all eigenvalues of S are negative real numbers. | Determination of matrix properties.
$\lambda_{max}(S)$ / $\lambda_{min}(S)$ (all eigenvalues of S real) | The maximum and minimum eigenvalues of matrix S, respectively. | Eigenvalue analysis (e.g., matrix stability, spectral radius calculation).
$(\cdot)^T$ / $(\cdot)^H$ | The transpose and conjugate transpose of a matrix or vector, respectively. | Matrix/vector operations (e.g., inner product calculation, definition of Hermitian matrices).
$\mathrm{diag}(\cdot)$ | Diagonal matrix whose diagonal entries are the listed elements; off-diagonal entries are 0. | Matrix construction (e.g., diagonalized matrices, sign matrices).
$\otimes$ | Kronecker product, satisfying: 1. $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$; 2. if $A \geq 0$ and $B \geq 0$, then $A \otimes B \geq 0$. | High-dimensional matrix operations (e.g., tensor products, block-matrix construction).
$\mathrm{sign}(e_i)$ | Sign matrix $\mathrm{diag}\{\mathrm{sign}(e_{i1}), \mathrm{sign}(e_{i2}), \ldots, \mathrm{sign}(e_{is})\}$, where $\mathrm{sign}(\cdot)$ is the sign function (1 for positive numbers, $-1$ for negative numbers, 0 for zero). | Matrix construction related to vector signs (e.g., error sign analysis).
$\|\cdot\|$ / $\|\cdot\|_1$ | The Euclidean norm (2-norm) and 1-norm, respectively. | Norm calculation of vectors or matrices (e.g., error measurement, convergence analysis).
$*$ | In symmetric block matrices, denotes the entries below the main diagonal (symmetric to the corresponding entries above it). | Simplified representation of symmetric block matrices (avoiding redundant writing of symmetric elements).