Article

Practical Prescribed-Time Trajectory Tracking Consensus for Nonlinear Heterogeneous Multi-Agent Systems via an Event-Triggered Mechanism

1 College of Computer and Control Engineering, Qiqihar University, Qiqihar 161000, China
2 Heilongjiang Key Laboratory of Big Data Network Security Detection and Analysis, Qiqihar University, Qiqihar 161000, China
3 College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
4 Computer and Big Data College, Heilongjiang University, Harbin 161006, China
* Authors to whom correspondence should be addressed.
Actuators 2025, 14(12), 574; https://doi.org/10.3390/act14120574
Submission received: 19 September 2025 / Revised: 26 October 2025 / Accepted: 21 November 2025 / Published: 26 November 2025
(This article belongs to the Special Issue Analysis and Design of Linear/Nonlinear Control System—2nd Edition)

Abstract

This paper investigates the problem of practical prescribed-time trajectory tracking consensus for higher-order heterogeneous multi-agent systems (MASs) with unknown time-varying control gains and non-vanishing uncertainties, while also taking external disturbances into account. A directed graph is employed to characterize the communication topology. By introducing a time-varying scaling function and adopting an adaptive backstepping design, a distributed controller is developed that achieves practical prescribed-time trajectory tracking consensus of heterogeneous MASs, while avoiding unbounded control gains and preserving the continuity of control inputs. Furthermore, to optimize the use of communication resources, an event-triggered mechanism is incorporated to reduce the update frequency of controllers. Finally, numerical simulations are carried out to verify the effectiveness of the proposed method.

1. Introduction

In practical applications, various types of agents with different functions, models, and behaviors are often involved [1,2]. A multi-agent system composed of agents with different types, dynamic models, or functionalities is referred to as a heterogeneous multi-agent system. Compared to homogeneous MASs, heterogeneous MASs can accommodate a wider range of tasks. Moreover, since different types of agents can complement each other, the system can maintain functionality even if some agents fail, as others can continue performing the tasks. Owing to the superior flexibility, efficiency, robustness, and task-handling ability of heterogeneous MASs, their consensus tracking problem has increasingly become a research focus.
As a crucial metric for evaluating system performance, convergence speed is also a core indicator in the consensus tracking control problem. Initially, researchers proposed asymptotic consensus tracking [3] and finite-time consensus tracking [4,5]. Compared to asymptotic consensus tracking algorithms, finite-time consensus tracking algorithms offer faster convergence speed, strong robustness, and better disturbance rejection, ensuring that the multi-agent system achieves consensus tracking within a finite time. However, the convergence time of finite-time consensus tracking algorithms depends on initial conditions, which are often unknown in practical engineering applications, making it difficult to accurately estimate the convergence time. To address this limitation, researchers began studying fixed-time consensus tracking algorithms [6,7]. Unlike finite-time consensus tracking, fixed-time consensus tracking ensures a settling time that does not depend on the initial state, making it more suitable for practical applications. However, the settling time of fixed-time consensus algorithms is influenced by controller parameters and the network topology of the system, leading to increased computational complexity and challenges in accurately estimating the convergence time. Additionally, when the initial state differences are large, the initial control input of the fixed-time controller tends to have a significant magnitude, posing challenges for practical engineering applications. To overcome these issues, researchers have further explored prescribed-time consensus tracking algorithms [8,9,10]. The key advantage of this approach is that the system’s convergence time can be predetermined based on the control task, without relying on initial conditions or system design parameters. In [8], a new distributed prescribed-time consensus tracking control approach was proposed for heterogeneous nonlinear MASs subject to deception attacks and actuator failures. 
By designing an adaptive prescribed-time controller, the prescribed-time tracking control problem of heterogeneous nonlinear MASs was transformed into a tracking error constraint problem, thereby achieving consensus tracking. In [9], the prescribed-time consensus tracking problem was investigated for heterogeneous MASs with static leaders and matched disturbances on directed graphs. In [10], a summary of the latest trends and methods in prescribed-time control for MASs was provided, along with several representative time-varying-function controllers capable of ensuring prescribed-time consensus.
However, certain prescribed-time consensus controllers are restricted to a specific time interval [11] or become zero after the prescribed time [12]. In both cases, the system remains in an uncontrolled state, making the consensus protocol highly susceptible to disruption even under minor external disturbances. Additionally, prescribed-time consensus algorithms may exhibit infinite gain at the prescribed time, particularly in noisy environments, which can lead to instability or impracticality in real-world engineering applications. To address these issues, the practical prescribed-time consensus tracking algorithm has been proposed [13,14]. Compared to conventional prescribed-time consensus tracking algorithms, this approach eliminates the infinite gain problem, ensuring that the control input stays within bounds throughout the process, making it more suitable for practical engineering applications.
However, the aforementioned research results require continuous controller updates, leading to unnecessary computational and communication overhead, which wastes computational resources and communication bandwidth. To decrease the rate of controller updates, reference [15] introduced the concept of event-triggered control. Unlike traditional time-periodic sampling methods, event-triggered algorithms update the controller based on a designed event mechanism, activating only when necessary. To balance convergence speed while minimizing resource consumption, researchers have developed various event-triggered finite-time consensus tracking strategies [16,17,18], fixed-time consensus tracking strategies [19,20,21], and prescribed-time consensus tracking strategies [22,23,24]. In [22], a distributed prescribed-time control method was proposed for nonlinear MASs, leveraging neural networks to estimate unknown nonlinearities and introducing a dynamic memory-based event-triggered mechanism to reduce controller execution frequency. In [23], prescribed-time leader-follower tracking control was investigated for second-order MASs under an event-triggered framework. In [24], fully distributed prescribed-time leader-follower output consensus was studied for heterogeneous MASs with a dynamic event-triggered mechanism.
The aforementioned studies have been instrumental in advancing MASs. Inspired by these works, this paper introduces a practical event-triggered prescribed-time trajectory tracking consensus algorithm for high-order nonlinear heterogeneous MASs subjected to external disturbances. The primary advantages of this algorithm are listed below: (1) For higher-order nonlinear heterogeneous MASs subject to external disturbances and unknown time-varying control gains, a distributed controller is designed under directed communication graphs by employing a time-varying scaling function and the adaptive backstepping method, which guarantees the achievement of practical prescribed-time consensus tracking. Unlike existing schemes where unbounded control gains may arise at the prescribed time, the proposed controller successfully avoids the issue of infinite control gains during the operation of MASs. (2) An event-triggered mechanism is introduced to reduce communication resource consumption. Compared to event-triggered finite-time consensus tracking algorithms [16,17,18] and fixed-time consensus tracking algorithms [19,20,21], the presented practical prescribed-time trajectory tracking consensus algorithm, incorporating an event-triggered mechanism, ensures a convergence time that remains unaffected by the system’s initial state, topology, or control parameters, enabling users to specify it arbitrarily. (3) Unlike the approaches in [8,24], the control scheme proposed in this paper ensures that the system operates continuously throughout the entire time interval. Moreover, it does not require prior knowledge of the lower or upper bounds of the control gains and can achieve trajectory tracking consensus for heterogeneous MASs within the prescribed time, regardless of the initial conditions.
Notation 1. 
$\mathbb{R}^{N \times N}$ represents the set of all $N \times N$ real-valued matrices, while $\mathbb{R}$ denotes the set of real numbers. $L_{\infty}$ represents the set of all bounded functions defined on $\mathbb{R}_{+}$ (the non-negative real numbers). $\mathbb{Z}_{+}$ represents the set of positive integers.

2. Problem Formulation and Preliminary

2.1. Signed Graph Theory

A heterogeneous multi-agent system comprising N followers is considered. The system’s communication topology is denoted by the directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, where $\mathcal{V} = \{v_1, v_2, \ldots, v_N\}$ denotes the set of vertices representing the N followers and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ represents the set of directed edges. $\mathcal{A} = [a_{i,j}] \in \mathbb{R}^{N \times N}$ is the adjacency matrix of the directed graph $\mathcal{G}$, where $a_{i,j} \ge 0$. If $(v_i, v_j) \in \mathcal{E}$, then $a_{i,j} > 0$; otherwise, $a_{i,j} = 0$. The existence of a directed edge $(v_i, v_j) \in \mathcal{E}$ signifies that agent i can obtain information from agent j, but the reverse is not necessarily true. Define the in-degree matrix as $D = \mathrm{diag}(d_1, d_2, \ldots, d_N)$, where $d_i = \sum_{j \in N_i} a_{i,j}$ for $i = 1, \ldots, N$. Consequently, the Laplacian matrix of the directed graph $\mathcal{G}$ is given by $L = D - \mathcal{A} = [p_{i,j}] \in \mathbb{R}^{N \times N}$, where $p_{i,j} = \sum_{j=1, j \ne i}^{N} a_{i,j}$ when $i = j$, and $p_{i,j} = -a_{i,j}$ when $i \ne j$. A directed graph $\mathcal{G}$ is considered strongly connected if a directed path exists between every pair of followers. The reference trajectory information for the N followers is generated by a leader, indexed as agent 0, and is directly accessible to certain followers. Define the augmented graph as $\bar{\mathcal{G}} = (\bar{\mathcal{V}}, \bar{\mathcal{E}})$, where $\bar{\mathcal{V}} = \{v_0, v_1, v_2, \ldots, v_N\}$ and $\bar{\mathcal{E}} \subseteq \bar{\mathcal{V}} \times \bar{\mathcal{V}}$. The leader only transmits information without receiving information from followers; for follower i, $b_i = 1$ if it can directly access the leader’s information, and $b_i = 0$ otherwise. Let $B = \mathrm{diag}(b_1, b_2, \ldots, b_N)$ denote the leader adjacency matrix for the followers.
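The constructions above are easy to check numerically. The sketch below builds $D$, $L = D - \mathcal{A}$, and $B$ for a hypothetical 4-follower ring topology with one pinned follower; the weights and the pinned agent are illustrative assumptions, not the paper’s simulation setup.

```python
import numpy as np

# Hypothetical directed ring of 4 followers: a_ij > 0 means agent i
# receives information from agent j (illustrative adjacency, not from the paper).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # in-degree matrix D = diag(d_1, ..., d_N)
L = D - A                    # Laplacian L = D - A: zero row sums, -a_ij off-diagonal

# Leader adjacency: only follower 1 can access the leader (b_1 = 1, assumption).
B = np.diag([1.0, 0.0, 0.0, 0.0])

# Under Assumptions 5-6 (leader-rooted spanning tree), Lemma 1 states that
# L + B is non-singular; a determinant check confirms it for this topology.
print(np.linalg.det(L + B))
```

The non-zero determinant is what later allows the consensus error to be related invertibly to the tracking error.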

2.2. Problem Formulation

A heterogeneous multi-agent system with N followers is considered, where the dynamics of the ith follower are given by the following equation   
$$\begin{cases} \dot{x}_{i,o}(t) = g_{i,o}(\bar{x}_{i,o}, t)\, x_{i,o+1}(t) + f_{i,o}(\bar{x}_{i,o}, t) + d_{i,o}(t), & o = 1, \ldots, n-1 \\ \dot{x}_{i,n}(t) = g_{i,n}(\bar{x}_{i,n}, t)\, u_i(t) + f_{i,n}(\bar{x}_{i,n}, t) + d_{i,n}(t) & \\ y_i = x_{i,1} & \end{cases} \quad (1)$$
where x i , o ( t ) R represents the system state of the ith agent; x ¯ i , o = [ x i , 1 , , x i , o ] T ( i = 1 , , N ; o = 1 , , n ) represents the state vector of the ith follower; u i ( t ) R denotes the control input of the system; y i R represents the system output; f i , o ( x ¯ i , o , t ) R ( o = 1 , , n ) represents the aggregated uncertainty, which is an unknown, continuously differentiable, and non-vanishing function. g i , o ( x ¯ i , o , t ) R ( o = 1 , , n ) denotes the unknown time-varying control gain; and d i , o ( t ) ( o = 1 , , n ) is an unknown time-varying external disturbance.
Assumption 1. 
The reference trajectory $d_r(t) \in \mathbb{R}$ and its $c$th ($c = 1, \ldots, n-1$) order derivatives are given, piecewise continuous, and remain bounded for $t \in [t_0, +\infty)$.
Assumption 2. 
For the mismatched and non-vanishing uncertainty $f_{i,o}(\bar{x}_{i,o}, t)$ ($o = 1, \ldots, n$), there exist a known scalar function $\psi_{i,o}(\bar{x}_{i,o})$, continuously differentiable with respect to $\bar{x}_{i,o}$, and an unknown constant $\ell_{i,o} > 0$ ($o = 1, \ldots, n$) such that
$$|f_{i,o}(\bar{x}_{i,o}, t)| \le \ell_{i,o}\, \psi_{i,o}(\bar{x}_{i,o}) \quad (2)$$
Moreover, for any $\bar{x}_{i,o}$, the function $\psi_{i,o}(\bar{x}_{i,o})$ is either bounded or is bounded if and only if $\bar{x}_{i,o}$ is bounded. Additionally, this function is independent of the system design parameters.
Assumption 3. 
$g_{i,o}(\bar{x}_{i,o}, t)$ is bounded and positive definite. Therefore, there exist unknown constants $\underline{g}_{i,o}$ and $\bar{g}_{i,o}$ such that $0 < \underline{g}_{i,o} \le |g_{i,o}(\bar{x}_{i,o}, t)| \le \bar{g}_{i,o} < +\infty$ for all $t$. Additionally, it is assumed that the control direction remains consistent (we presume that $\mathrm{sgn}(g_{i,o}(\bar{x}_{i,o}, t)) = 1$).
Assumption 4. 
The unknown time-varying external disturbance $d_{i,o}(t)$ ($o = 1, \ldots, n$) is bounded and satisfies $|d_{i,o}(t)| < D_{i,o}$, where $D_{i,o}$ is an unknown constant.
Assumption 5 
([25]). The extended graph G ¯ includes a spanning tree with the leader node 0 as its root.
Assumption 6. 
The directed graph representing the followers is strongly connected, with at least one follower having direct access to the accurate information of the desired trajectory.
The objective of this paper is to design a control scheme that achieves practical prescribed-time consensus tracking of a desired trajectory for a nonlinear heterogeneous multi-agent system with non-vanishing uncertainties, unknown time-varying control gains, and external disturbances. Specifically, under Assumptions 1–6, the tracking error $e_i(t) = y_i(t) - d_r(t)$ is ensured to converge to a small neighborhood of zero within the prescribed time, remain there thereafter, and converge to zero asymptotically. Furthermore, an event-triggered mechanism is incorporated to reduce communication resource consumption and enhance system efficiency.

2.3. The Preliminary

To facilitate the subsequent analysis, several fundamental lemmas and definitions are introduced in this section.
Definition 1 
([10]). For system (1), the practical prescribed-time reference trajectory tracking consensus is said to be achievable if there exists a control input u i such that, for any initial state x i , 1 ( 0 ) and for all i , j V , the following conditions are satisfied   
$$\lim_{t \to T_p} |x_{i,1}(t) - d_r(t)| \le \iota; \qquad |x_{i,1}(t) - d_r(t)| \le \iota, \;\; \forall t > T_p; \qquad \lim_{t \to +\infty} |x_{i,1}(t) - d_r(t)| = 0 \quad (3)$$
where $T_p$ is a user-specified constant independent of initial conditions and system parameters, and $\iota$ is a positive constant that can be tuned to achieve the desired accuracy level.
Lemma 1 
([25]). Under the conditions of Assumptions 5 and 6, L + B is non-singular.
Lemma 2 
([26]). The hyperbolic tangent function $\tanh(\cdot)$ has the following properties:
$$0 \le |\xi| - \xi \tanh\!\left(\frac{\xi}{\tau}\right) \le 0.2785\,\tau, \qquad \xi \tanh\!\left(\frac{\xi}{\tau}\right) \ge 0 \quad (4)$$
where $\tau > 0$, $\xi \in \mathbb{R}$.
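Lemma 2 is easy to spot-check numerically; the value of $\tau$ and the evaluation grid below are arbitrary choices made only for this check.

```python
import numpy as np

# Numerical check of Lemma 2: for tau > 0, the gap |xi| - xi*tanh(xi/tau)
# is non-negative and never exceeds 0.2785*tau, and xi*tanh(xi/tau) >= 0.
tau = 0.5                                   # arbitrary positive constant
xi = np.linspace(-10.0, 10.0, 100001)       # dense grid over xi in R
gap = np.abs(xi) - xi * np.tanh(xi / tau)

print(gap.min(), gap.max())                 # gap stays within [0, 0.2785*tau]
```

The bound $0.2785\tau$ is the reason the $\tanh$-based compensation terms in the later controllers contribute only a small constant (the $0.2785\tau$ residues in $\Theta_{i,q}$) rather than an unbounded effect.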
Define a time-varying scaling function ϱ ( t ) over the global time interval as follows:
$$\varrho(t) = \begin{cases} \left(\dfrac{T_p}{T_p + t_0 - t}\right)^{l}, & t \in [t_0, \; T_p + t_0 - \nu) \\[4pt] \left(\dfrac{T_p}{\nu}\right)^{l}, & t \in [T_p + t_0 - \nu, \; +\infty) \end{cases} \quad (5)$$
where $t_0$ represents the initial time, $l \ge \max\{2, n\}$ is a user-specified real number, $T_p$ is a predetermined finite time set by the designer based on physical considerations, and $\nu$ is a small constant that satisfies $0 < \nu < T_p$. The function $\varrho(t)$ is monotonically increasing and continuous for $t \in [t_0, +\infty)$.
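A minimal sketch of $\varrho(t)$, with illustrative values of $t_0$, $T_p$, $\nu$, and $l$ (assumptions for the demo only): the function starts at $1$, grows monotonically on $[t_0, T_p + t_0 - \nu)$, and then freezes at the finite value $(T_p/\nu)^l$, which is what keeps the control gains bounded at and beyond the prescribed time.

```python
import numpy as np

# Illustrative parameters (not from the paper's simulation).
t0, Tp, nu, l = 0.0, 5.0, 0.5, 2

def rho(t):
    """Time-varying scaling function of (5)."""
    if t < Tp + t0 - nu:
        return (Tp / (Tp + t0 - t)) ** l   # growing branch on [t0, Tp + t0 - nu)
    return (Tp / nu) ** l                  # frozen at its finite final value

ts = np.linspace(t0, Tp + 2.0, 2001)
vals = np.array([rho(t) for t in ts])
print(vals[0], vals[-1])                   # 1 at t0, (Tp/nu)**l afterwards
```

Because the second branch is the limit of the first branch at $t = T_p + t_0 - \nu$, $\varrho$ is continuous there, unlike conventional prescribed-time gains that blow up as $t \to T_p + t_0$.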
Lemma 3 
([27]). Let $\varpi(t)$ be an unknown but bounded function, whose norm over the interval $[t_0, t]$ is defined as $\|\varpi\|_{[t_0, t]} = \sup_{\epsilon \in [t_0, t]} |\varpi(\epsilon)|$, and let $V(t): [t_0, +\infty) \to \mathbb{R}_{+} \cup \{0\}$ be a $C^1$ function.
(1) For $t \in [t_0, T_p + t_0 - \nu)$, if
$$\dot{V}(t) \le -k \varrho V(t) - \frac{2l}{T_p} \varrho^{\frac{1}{l}} V(t) + \frac{\varpi(t)}{\varrho} \quad (6)$$
then
$$V(t) \le \frac{\kappa_1(t)}{\varrho(t)^2} V(t_0) + \frac{\|\varpi\|_{[t_0, t]}}{k \varrho(t)^2} \quad (7)$$
where $k$ is a finite positive constant, and $\kappa_1(t) = \exp\!\Big(-\frac{k T_p}{l-1}\Big(\big(\frac{T_p}{T_p + t_0 - t}\big)^{l-1} - 1\Big)\Big)$ is a monotonically decreasing function.
(2) For $t \in [T_p + t_0 - \nu, +\infty)$, if
$$\dot{V}(t) \le -\bar{k} \varrho V(t) - \frac{2l}{T_p} \varrho^{\frac{1}{l}} V(t) + \frac{\varpi(t)}{\varrho} \quad (8)$$
then
$$V(t) \le \frac{\kappa_2(t)\, \kappa_1(T_p + t_0 - \nu)}{\varrho^2} V(t_0) + \frac{\kappa_2(t)\, \|\varpi\|_{[t_0, T_p + t_0 - \nu)}}{k \varrho^2} + \frac{\|\varpi\|_{[T_p + t_0 - \nu, t]}}{\bar{k} \varrho^2} \quad (9)$$
where $\bar{k}$ is a finite positive constant, and $\kappa_2(t) = \exp\!\big(-\bar{k}\big(\frac{T_p}{\nu}\big)^{l} (t - T_p - t_0 + \nu)\big)$ is a monotonically decreasing function.
Lemma 4 
([27]). Let $\vartheta(t)$ be an unknown but bounded function, whose norm over the interval $[t_0, t]$ is defined as $\|\vartheta\|_{[t_0, t]} = \sup_{\epsilon \in [t_0, t]} |\vartheta(\epsilon)|$. If a $C^1$ function $\eta(t)$ satisfies
$$\dot{\eta}(t) \le -k \varrho(t) \eta(t) + \varrho(t) \vartheta(t) \quad (10)$$
where $k > 0$, then
(1) For $t \in [t_0, T_p + t_0 - \nu)$, we have
$$\eta(t) \le \exp\!\Big(\frac{k T_p}{l-1}\big(1 - \varrho(t)^{1-\frac{1}{l}}\big)\Big) \eta(t_0) + \frac{\|\vartheta\|_{[t_0, t]}}{k} \quad (11)$$
(2) For $t \in [T_p + t_0 - \nu, +\infty)$, we have
$$\eta(t) \le \exp\!\Big(\frac{k T_p}{l-1}\big(1 - \varrho(T_p + t_0 - \nu)^{1-\frac{1}{l}}\big)\Big) \eta(t_0) + \frac{\|\vartheta\|_{[t_0, T_p + t_0 - \nu)}}{k} + \frac{\|\vartheta\|_{[T_p + t_0 - \nu, t]}}{k \varrho^2} \quad (12)$$
Lemma 5 
(Young’s Inequality [28]). For any $\zeta_1, \zeta_2 \in \mathbb{R}$, the following inequality holds:
$$\zeta_1 \zeta_2 \le \frac{\varrho \zeta \zeta_1^2}{2} + \frac{\zeta_2^2}{2 \varrho \zeta} \quad (13)$$
where $\varrho$ is given in Equation (5), and $\zeta > 0$ is a user-defined constant.
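This weighted form of Young’s inequality can be spot-checked on random data; the fixed product $\varrho\zeta$ below is an arbitrary positive number standing in for $\varrho(t)\cdot\zeta$ at some instant.

```python
import numpy as np

# Random spot-check of the weighted Young's inequality (13):
# z1*z2 <= (w/2)*z1^2 + z2^2/(2*w) for any weight w = rho*zeta > 0.
rng = np.random.default_rng(0)
z1 = rng.normal(size=10000)
z2 = rng.normal(size=10000)
w = 3.7 * 0.8                       # illustrative rho * zeta > 0

lhs = z1 * z2
rhs = w * z1**2 / 2 + z2**2 / (2 * w)
print(np.max(lhs - rhs))            # never positive
```

The identity behind it is $(\sqrt{w}\,\zeta_1 - \zeta_2/\sqrt{w})^2 \ge 0$; choosing a large $\varrho\zeta$ trades a heavier $\zeta_1^2$ term for a lighter $\zeta_2^2$ term, which is how the $1/\varrho$-weighted residues appear in (19)–(24).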
Lemma 6 
([29]). Take into account the following first-order differential equation:
$$\dot{\eta}_1(t) = -a \eta_1(t) + b \eta_2(t) \quad (14)$$
where $a > 0$, $b > 0$, and $\eta_2(t)$ is a non-negative function. In this case, as long as $\eta_1(t_0) \ge 0$, $\eta_1(t)$ remains non-negative for all $t$.
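An explicit-Euler simulation illustrating Lemma 6; the constants $a$, $b$, the step size, and the non-negative input $\eta_2(t) = |\sin t|$ are illustrative assumptions. This non-negativity property is what guarantees $\hat{\eth}_{i,m} \ge 0$ under the adaptive laws used in the controller design.

```python
import numpy as np

# Euler simulation of (14): eta1' = -a*eta1 + b*eta2 with eta2(t) >= 0.
# Starting from eta1(t0) >= 0, the trajectory never becomes negative.
a, b, dt = 2.0, 1.0, 1e-3
ts = np.arange(0.0, 10.0, dt)
eta1 = 0.0
traj = []
for t in ts:
    eta2 = abs(np.sin(t))               # any non-negative forcing term
    eta1 += dt * (-a * eta1 + b * eta2)  # eta1_{k+1} = (1 - a*dt)*eta1_k + b*dt*eta2
    traj.append(eta1)
traj = np.array(traj)
print(traj.min())                        # >= 0
```

The discrete recursion makes the mechanism visible: with $1 - a\,\Delta t > 0$ and $\eta_2 \ge 0$, a non-negative state maps to a non-negative state.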

3. Controller Design and Stability Analysis

3.1. Event-Triggered Controller Design

First, to achieve efficient control with reduced communication burden, the event-triggered control strategy is proposed as follows:
$$\begin{aligned} \Delta_i(t) = {} & -(1+h)\Big[\Big(k_{i,n} + \frac{\zeta_{i,n} \hat{\eth}_{i,n} \varpi_{i,n}}{2} + \frac{r_{i,n} \vartheta_{i,n}}{2} + \frac{r_{j,n} \vartheta_{j,n}}{2}\Big) \varrho z_{i,n} + c_{i,n} \frac{l^2}{T_p^2} \varrho^{1+\frac{2}{l}} z_{i,n}^3 \\ & \quad + \frac{\varrho z_{i,n-1}^2 z_{i,n}}{2} + \frac{1}{\varrho} \beta_{i,n-1} \tanh\!\Big(\frac{z_{i,n} \beta_{i,n-1}}{\tau}\Big)\Big] - \frac{(1+h)\, z_{i,n}\, \varrho}{2(1-h)^2} \\ u_i(t) = {} & \Delta_i(t_k), \quad t \in [t_k, t_{k+1}) \\ t_{k+1} = {} & \inf\{ t \in \mathbb{R} \mid |E_i(t)| \ge h\, |u_i(t)| + h_1 \} \end{aligned} \quad (15)$$
where $\Delta_i(t)$ is a computable quantity, $E_i(t) = \Delta_i(t) - u_i(t)$ represents the measurement error, $k_{i,n} > 0$, $c_{i,n} \ge \frac{1}{2}$, $k_{\eth_{i,n}} = \bar{k}_{\eth_{i,n}} + \frac{2l}{T_p}\big(\frac{T_p}{\nu}\big)^{\frac{1}{l}}$, $\bar{k}_{\eth_{i,n}} > 0$, $r_{i,n} > 0$, $\zeta_{i,n} > 0$, $0 < h < 1$, $h_1 > 0$, and $\beta_{i,n-1} = \sum_{m=1}^{n} \frac{\partial \alpha_{i,n-1}}{\partial \hat{\eth}_{i,m}} \dot{\hat{\eth}}_{i,m}$. Additionally, $\vartheta_{j,n} = \sum_{m=1}^{n} \big(\frac{\partial \alpha_{i,n-1}}{\partial x_{j,m}} x_{j,m+1}\big)^2 + \sum_{m=1}^{n} \big(\frac{\partial \alpha_{i,n-1}}{\partial x_{j,m}}\big)^2$, $\varpi_{i,n} = \sum_{m=1}^{n-1} \big(\frac{\partial \alpha_{i,n-1}}{\partial x_{i,m}} \psi_{i,m}\big)^2 + \sum_{m=1}^{n} \big(\frac{\partial \alpha_{i,n-1}}{\partial x_{j,m}} \psi_{j,m}\big)^2 + \psi_{i,n}^2$, and $\vartheta_{i,n} = \sum_{m=1}^{n-1} \big(\frac{\partial \alpha_{i,n-1}}{\partial x_{i,m}} x_{i,m+1}\big)^2 + \sum_{m=1}^{n-1} \big(\frac{\partial \alpha_{i,n-1}}{\partial x_{i,m}}\big)^2 + \sum_{m=0}^{n-1} \big(\frac{\partial \alpha_{i,n-1}}{\partial d_r^{(m)}} d_r^{(m+1)}\big)^2 + \big(\frac{\partial \alpha_{i,n-1}}{\partial \varrho} \frac{l}{T_p} \varrho^{1+\frac{1}{l}}\big)^2 + 1$. Furthermore, $z_{i,n}$ and $\hat{\eth}_{i,m}$ represent the error variable and the estimated parameters, respectively.
Remark 1. 
When the measurement error E i ( t ) remains within the dynamic threshold, the controller maintains its value from the last triggering instant t k . Otherwise, the control signal u i ( t ) is updated at t k + 1 . Under the influence of the control action, E i ( t ) eventually returns to the threshold range, and this process repeats cyclically.
Remark 2. 
As the event-triggering threshold varies with the control signal, h primarily regulates the triggering threshold of E i ( t ) , influencing the update frequency of the controller. Meanwhile, the parameter h 1 is mainly used to adjust the triggering interval and prevent Zeno behavior. Proper selection of h and h 1 is crucial for balancing control performance and communication resource utilization.
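The relative-plus-absolute triggering rule in (15) can be sketched independently of the full control law. In the sketch below, $\Delta_i$ is replaced by an arbitrary smooth stand-in signal, and the values of $h$ and $h_1$ are illustrative; the point is only to show how the held input $u_i$ is updated when the measurement error leaves the threshold.

```python
import numpy as np

# Sketch of the event-triggered sampling rule of (15): the applied input u
# holds its last value until |E(t)| = |Delta(t) - u| >= h*|u| + h1.
# Delta here is a stand-in signal, NOT the paper's full control law.
h, h1, dt = 0.3, 0.05, 1e-3            # illustrative threshold parameters
ts = np.arange(0.0, 10.0, dt)
Delta = np.sin(ts)                      # stand-in for the computed control signal
u = Delta[0]
events = 0
applied = []
for dk in Delta:
    if abs(dk - u) >= h * abs(u) + h1:  # threshold violated -> update the input
        u = dk
        events += 1
    applied.append(u)
applied = np.array(applied)
print(events, len(ts))                  # far fewer updates than samples
```

Between events the mismatch stays below the dynamic threshold by construction, while the absolute offset $h_1$ keeps the threshold away from zero near $u_i = 0$, which is what rules out Zeno behavior.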
Secondly, the following coordinate transformation is introduced:
$$z_{i,1} = \sum_{j=1}^{N} a_{i,j}(x_{i,1} - x_{j,1}) + b_i(x_{i,1} - d_r), \qquad z_{i,q} = x_{i,q} - \alpha_{i,q-1} \quad (16)$$
where $q = 2, 3, \ldots, n$, $\alpha_{i,q-1}(t) \in \mathbb{R}$ represents the virtual controller, $z_{i,1}(t) \in \mathbb{R}$ denotes the consensus error of the $i$th agent, and $z_{i,q}(t) \in \mathbb{R}$ represents the error variable of the $i$th agent. Define $T = T_p + t_0 - \nu$; the analysis is separated into two main parts: $t \in [t_0, T)$ and $t \in [T, +\infty)$.
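The consensus error $z_{i,1}$ of (16) can be computed directly from neighbor information, or equivalently in the compact form $(L + B)(x_1 - d_r \mathbf{1})$, using the fact that Laplacian row sums vanish. The topology, states, and reference value below are illustrative assumptions.

```python
import numpy as np

# z1_i = sum_j a_ij (x_i1 - x_j1) + b_i (x_i1 - dr), per (16).
# Illustrative 3-follower topology and states (not the paper's example).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0])       # only follower 1 sees the leader
x1 = np.array([1.2, 0.7, -0.4])     # first state of each follower
dr = 0.5                            # leader/reference output

z1 = A.sum(axis=1) * x1 - A @ x1 + b * (x1 - dr)

# Equivalent matrix form: z1 = (L + B)(x1 - dr*1), with L = D - A and L@1 = 0.
L = np.diag(A.sum(axis=1)) - A
z1_matrix = (L + np.diag(b)) @ (x1 - dr)
print(z1, z1_matrix)
```

The equivalence, together with the non-singularity of $L + B$ (Lemma 1), is what lets a bound on $z_{i,1}$ be converted into a bound on the actual tracking error $x_{i,1} - d_r$.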
Part   1 :   t [ t 0 , T )
Step   1 :   From (1) and (16), the time derivative of z i , 1 ( t ) can be derived as:
$$\begin{aligned} \dot{z}_{i,1} &= \sum_{j=1}^{N} a_{i,j}(\dot{x}_{i,1} - \dot{x}_{j,1}) + b_i(\dot{x}_{i,1} - \dot{d}_r) \\ &= \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big)\big(g_{i,1}(z_{i,2} + \alpha_{i,1}) + f_{i,1} + d_{i,1}\big) - \sum_{j=1}^{N} a_{i,j}\big(g_{j,1} x_{j,2} + f_{j,1} + d_{j,1}\big) - b_i \dot{d}_r \end{aligned} \quad (17)$$
According to (16), the time derivative of $\frac{z_{i,1}^2}{2 \underline{g}_{i,1}}$ is given by:
$$\frac{1}{\underline{g}_{i,1}} z_{i,1} \dot{z}_{i,1} = \frac{1}{\underline{g}_{i,1}} z_{i,1} \Big( \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big)\big(g_{i,1}(z_{i,2} + \alpha_{i,1}) + f_{i,1} + d_{i,1}\big) - \sum_{j=1}^{N} a_{i,j}\big(g_{j,1} x_{j,2} + f_{j,1} + d_{j,1}\big) - b_i \dot{d}_r \Big) \quad (18)$$
By Lemma 5, it can be determined that
$$\frac{1}{\underline{g}_{i,1}} \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big) g_{i,1} z_{i,2} z_{i,1} \le \frac{\varrho z_{i,1}^2 z_{i,2}^2}{2} + \frac{\bar{g}_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \underline{g}_{i,1}^2 \varrho} \quad (19)$$
$$\frac{1}{\underline{g}_{i,1}} \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big) f_{i,1} z_{i,1} \le \frac{\zeta_{i,1} \varrho z_{i,1}^2 \ell_{i,1}^2 \psi_{i,1}^2}{2} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2 \varrho} \quad (20)$$
$$\frac{1}{\underline{g}_{i,1}} \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big) d_{i,1} z_{i,1} \le \frac{r_{i,1} \varrho z_{i,1}^2}{2} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2 D_{i,1}^2}{2 r_{i,1} \underline{g}_{i,1}^2 \varrho} \quad (21)$$
$$-\frac{1}{\underline{g}_{i,1}} \sum_{j=1}^{N} a_{i,j} f_{j,1} z_{i,1} \le \frac{\zeta_{i,1} \varrho z_{i,1}^2 \ell_{j,1}^2 \psi_{j,1}^2}{2} + \frac{(\sum_{j=1}^{N} a_{i,j})^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2 \varrho} \quad (22)$$
$$-\frac{1}{\underline{g}_{i,1}} \sum_{j=1}^{N} a_{i,j} d_{j,1} z_{i,1} \le \frac{r_{j,1} \varrho z_{i,1}^2}{2} + \frac{(\sum_{j=1}^{N} a_{i,j})^2 D_{j,1}^2}{2 r_{j,1} \underline{g}_{i,1}^2 \varrho} \quad (23)$$
$$-\frac{1}{\underline{g}_{i,1}} z_{i,1} \dot{d}_r b_i \le \frac{1}{\underline{g}_{i,1}} b_i |z_{i,1}| |\dot{d}_r| \le \frac{r_{i,1} \varrho |\dot{d}_r|^2 z_{i,1}^2}{2} + \frac{b_i^2}{2 r_{i,1} \underline{g}_{i,1}^2 \varrho} \quad (24)$$
Substituting (19)–(24) into (18) yields
$$\begin{aligned} \frac{1}{\underline{g}_{i,1}} z_{i,1} \dot{z}_{i,1} \le {} & \frac{1}{\underline{g}_{i,1}} \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big) g_{i,1} z_{i,1} \alpha_{i,1} + \frac{\varrho z_{i,1}^2 z_{i,2}^2}{2} + \frac{\eth_{i,1} \varpi_{i,1} \zeta_{i,1} \varrho z_{i,1}^2}{2} + \frac{r_{i,1} \varrho z_{i,1}^2 \vartheta_{i,1}}{2} + \frac{r_{j,1} \varrho z_{i,1}^2}{2} \\ & + \frac{\bar{g}_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2 D_{i,1}^2}{2 r_{i,1} \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j})^2 D_{j,1}^2}{2 r_{j,1} \underline{g}_{i,1}^2 \varrho} \\ & + \frac{b_i^2}{2 r_{i,1} \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j})^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2 \varrho} - \sum_{j=1}^{N} a_{i,j} z_{i,1} x_{j,2} \end{aligned} \quad (25)$$
where $\eth_{i,1} = \max\{\ell_{i,1}^2, \ell_{j,1}^2\}$, $\varpi_{i,1} = \psi_{i,1}^2 + \psi_{j,1}^2$, and $\vartheta_{i,1} = 1 + |\dot{d}_r|^2$ are all computable and bounded functions.
For the ith agent, the Lyapunov function for the first step is chosen as follows:
$$V_{i,1} = \frac{z_{i,1}^2}{2 \underline{g}_{i,1}} + \frac{\tilde{\eth}_{i,1}^2}{2 \varrho^2} \quad (26)$$
where $\tilde{\eth}_{i,1} = \eth_{i,1} - \hat{\eth}_{i,1}$ represents the parameter estimation error, and $\hat{\eth}_{i,1}$ represents the estimate of $\eth_{i,1}$. The function $\varrho$ is given in (5). By combining (25), the time derivative of (26) is obtained as follows:
$$\begin{aligned} \dot{V}_{i,1} = {} & \frac{1}{\underline{g}_{i,1}} z_{i,1} \dot{z}_{i,1} + \frac{1}{\varrho^2} \tilde{\eth}_{i,1} \dot{\tilde{\eth}}_{i,1} - \frac{\dot{\varrho}}{\varrho^3} \tilde{\eth}_{i,1}^2 \\ \le {} & \frac{1}{\underline{g}_{i,1}} \Big(\sum_{j=1}^{N} a_{i,j} + b_i\Big) g_{i,1} z_{i,1} \alpha_{i,1} + \frac{\varrho z_{i,1}^2 z_{i,2}^2}{2} + \frac{\eth_{i,1} \varpi_{i,1} \zeta_{i,1} \varrho z_{i,1}^2}{2} + \frac{r_{j,1} \varrho z_{i,1}^2}{2} + \frac{r_{i,1} \varrho z_{i,1}^2 \vartheta_{i,1}}{2} \\ & + \frac{\bar{g}_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2 D_{i,1}^2}{2 r_{i,1} \underline{g}_{i,1}^2 \varrho} + \frac{(\sum_{j=1}^{N} a_{i,j})^2 D_{j,1}^2}{2 r_{j,1} \underline{g}_{i,1}^2 \varrho} \\ & + \frac{(\sum_{j=1}^{N} a_{i,j})^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2 \varrho} + \frac{b_i^2}{2 r_{i,1} \underline{g}_{i,1}^2 \varrho} - \sum_{j=1}^{N} a_{i,j} z_{i,1} x_{j,2} - \frac{1}{\varrho^2} \tilde{\eth}_{i,1} \dot{\hat{\eth}}_{i,1} - \frac{\dot{\varrho}}{\varrho^3} \tilde{\eth}_{i,1}^2 \end{aligned} \quad (27)$$
Design the virtual controller α i , 1 for the ith agent as follows:
$$\alpha_{i,1}(x_{i,1}, d_r, \varrho, \hat{\eth}_{i,1}, x_{j,1}, x_{j,2}) = \frac{1}{\sum_{j=1}^{N} a_{i,j} + b_i} \Big( -\Big(k_{i,1} + \frac{\zeta_{i,1} \hat{\eth}_{i,1} \varpi_{i,1}}{2} + \frac{r_{i,1} \vartheta_{i,1}}{2} + \frac{r_{j,1}}{2}\Big) \varrho z_{i,1} + \sum_{j=1}^{N} a_{i,j} x_{j,2} - c_{i,1} \frac{l^2}{T_p^2} \varrho^{1+\frac{2}{l}} z_{i,1}^3 \Big) \quad (28)$$
The corresponding adaptive law is given by
$$\dot{\hat{\eth}}_{i,1} = -k_{\eth_{i,1}} \varrho \hat{\eth}_{i,1} + \frac{\zeta_{i,1} \varrho^3 \varpi_{i,1} z_{i,1}^2}{2}, \qquad \hat{\eth}_{i,1}(t_0) > 0 \quad (29)$$
where $k_{i,1} > 0$, $c_{i,1} \ge \frac{1}{2}$, $k_{\eth_{i,1}} = \bar{k}_{\eth_{i,1}} + \frac{2l}{T_p}\big(\frac{T_p}{\nu}\big)^{\frac{1}{l}}$, $\bar{k}_{\eth_{i,1}} > 0$. According to Lemma 6, it can be ensured that $\hat{\eth}_{i,1} \ge 0$. Since $\tilde{\eth}_{i,1} \hat{\eth}_{i,1} = \tilde{\eth}_{i,1}(\eth_{i,1} - \tilde{\eth}_{i,1}) = -\tilde{\eth}_{i,1}^2 + \tilde{\eth}_{i,1} \eth_{i,1}$, it can be inferred that
$$\begin{aligned} -\frac{1}{\varrho^2} \tilde{\eth}_{i,1} \dot{\hat{\eth}}_{i,1} &= -\frac{k_{\eth_{i,1}} \varrho}{\varrho^2} \tilde{\eth}_{i,1}^2 + \frac{k_{\eth_{i,1}} \varrho}{\varrho^2} \tilde{\eth}_{i,1} \eth_{i,1} - \frac{\varrho \zeta_{i,1} \tilde{\eth}_{i,1} \varpi_{i,1} z_{i,1}^2}{2} \\ &\le -\frac{k_{\eth_{i,1}} \varrho}{\varrho^2} \tilde{\eth}_{i,1}^2 + \frac{k_{\eth_{i,1}} \varrho (\tilde{\eth}_{i,1}^2 + \eth_{i,1}^2)}{2 \varrho^2} - \frac{\varrho \zeta_{i,1} \tilde{\eth}_{i,1} \varpi_{i,1} z_{i,1}^2}{2} \\ &= -\frac{k_{\eth_{i,1}} \varrho}{2 \varrho^2} \tilde{\eth}_{i,1}^2 + \frac{k_{\eth_{i,1}} \eth_{i,1}^2}{2 \varrho} - \frac{\varrho \zeta_{i,1} \tilde{\eth}_{i,1} \varpi_{i,1} z_{i,1}^2}{2} \end{aligned} \quad (30)$$
By applying Young’s inequality, i.e., $\zeta_1^2 + \zeta_2^2 \ge 2 \zeta_1 \zeta_2$, it follows that
$$\begin{aligned} -c_{i,1} \frac{l^2}{T_p^2} \frac{g_{i,1}}{\underline{g}_{i,1}} \varrho^{1+\frac{2}{l}} z_{i,1}^4 &\le -c_{i,1} \frac{l^2}{T_p^2} \varrho^{1+\frac{2}{l}} z_{i,1}^4 = -c_{i,1} \Big( \frac{l^2}{T_p^2} \varrho^{1+\frac{2}{l}} z_{i,1}^4 + \frac{1}{\underline{g}_{i,1}^2 \varrho} \Big) + \frac{c_{i,1}}{\underline{g}_{i,1}^2 \varrho} \\ &\le -2 c_{i,1} \Big( \frac{l}{T_p} \sqrt{\varrho}\, \varrho^{\frac{1}{l}} z_{i,1}^2 \Big) \Big( \frac{1}{\underline{g}_{i,1} \sqrt{\varrho}} \Big) + \frac{c_{i,1}}{\underline{g}_{i,1}^2 \varrho} = -\frac{4 c_{i,1}}{2 \underline{g}_{i,1}} \frac{l}{T_p} \varrho^{\frac{1}{l}} z_{i,1}^2 + \frac{c_{i,1}}{\underline{g}_{i,1}^2 \varrho} \end{aligned} \quad (31)$$
By combining (30), (31), and (27), we obtain
$$\dot{V}_{i,1}(t) \le -k_{i,1} \varrho z_{i,1}^2 - \frac{4 c_{i,1} l}{T_p} \frac{\varrho^{\frac{1}{l}} z_{i,1}^2}{2 \underline{g}_{i,1}} - \frac{k_{\eth_{i,1}} \varrho \tilde{\eth}_{i,1}^2}{2 \varrho^2} + \frac{\varrho z_{i,1}^2 z_{i,2}^2}{2} - \frac{2l}{T_p} \varrho^{\frac{1}{l}} \frac{\tilde{\eth}_{i,1}^2}{2 \varrho^2} + \frac{\Theta_{i,1}}{\varrho} \quad (32)$$
where $\Theta_{i,1} = \frac{\bar{g}_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \underline{g}_{i,1}^2} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2} + \frac{D_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 r_{i,1} \underline{g}_{i,1}^2} + \frac{D_{j,1}^2 (\sum_{j=1}^{N} a_{i,j})^2}{2 r_{j,1} \underline{g}_{i,1}^2} + \frac{(\sum_{j=1}^{N} a_{i,j})^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2} + \frac{b_i^2}{2 r_{i,1} \underline{g}_{i,1}^2} + \frac{k_{\eth_{i,1}} \eth_{i,1}^2}{2} + \frac{c_{i,1}}{\underline{g}_{i,1}^2}$ is a finite positive but unknown constant.
Step 2: Taking the time derivative of $z_{i,2} = x_{i,2} - \alpha_{i,1}$ yields
$$\dot{z}_{i,2} = \dot{x}_{i,2} - \dot{\alpha}_{i,1} = g_{i,2}(z_{i,3} + \alpha_{i,2}) + f_{i,2} + d_{i,2} - \dot{\alpha}_{i,1} \quad (33)$$
The Lyapunov function for the second step of the ith follower is constructed as follows:
$$V_{i,2} = V_{i,1} + \frac{z_{i,2}^2}{2 \underline{g}_{i,2}} + \frac{\tilde{\eth}_{i,2}^2}{2 \varrho^2} \quad (34)$$
Taking the derivative of the aforementioned Lyapunov function yields
$$\dot{V}_{i,2} = \dot{V}_{i,1} + \frac{1}{\underline{g}_{i,2}} z_{i,2} \dot{z}_{i,2} - \frac{1}{\varrho^2} \tilde{\eth}_{i,2} \dot{\hat{\eth}}_{i,2} - \frac{\dot{\varrho}}{\varrho^3} \tilde{\eth}_{i,2}^2 = \dot{V}_{i,1} + \frac{1}{\underline{g}_{i,2}} z_{i,2} \big(g_{i,2} x_{i,3} + f_{i,2} + d_{i,2} - \dot{\alpha}_{i,1}\big) - \frac{1}{\varrho^2} \tilde{\eth}_{i,2} \dot{\hat{\eth}}_{i,2} - \frac{\dot{\varrho}}{\varrho^3} \tilde{\eth}_{i,2}^2 \quad (35)$$
According to Lemma 2, it follows that
$$-\frac{1}{\underline{g}_{i,2}} z_{i,2} \beta_{i,1} \le \frac{1}{\underline{g}_{i,2}} |z_{i,2} \beta_{i,1}| \le \frac{\bar{g}_{i,2}}{\underline{g}_{i,2} \varrho} \Big( z_{i,2} \beta_{i,1} \tanh\!\Big(\frac{z_{i,2} \beta_{i,1}}{\tau}\Big) + 0.2785\,\tau \Big) \quad (36)$$
Following a similar analytical procedure as in (19)–(24), we obtain
$$\begin{aligned} \frac{1}{\underline{g}_{i,2}} z_{i,2} \dot{z}_{i,2} \le {} & \frac{g_{i,2}}{\underline{g}_{i,2}} z_{i,2} \alpha_{i,2} + \frac{\varrho z_{i,2}^2 z_{i,3}^2}{2} + \frac{\eth_{i,2} \varpi_{i,2} \zeta_{i,2} \varrho z_{i,2}^2}{2} + \frac{r_{i,2} \varrho z_{i,2}^2 \vartheta_{i,2}}{2} + \frac{r_{j,2} \varrho z_{i,2}^2 \vartheta_{j,2}}{2} + \frac{\bar{g}_{i,2}^2}{2 \underline{g}_{i,2}^2 \varrho} \\ & + \frac{4}{2 \zeta_{i,2} \underline{g}_{i,2}^2 \varrho} + \frac{D_{i,1}^2 + D_{i,2}^2 + \bar{g}_{i,1}^2 + 3}{2 r_{i,2} \underline{g}_{i,2}^2 \varrho} + \frac{D_{j,1}^2 + D_{j,2}^2 + \bar{g}_{j,1}^2 + \bar{g}_{j,2}^2}{2 r_{j,2} \underline{g}_{i,2}^2 \varrho} \\ & + \frac{\bar{g}_{i,2}}{\underline{g}_{i,2} \varrho} \Big( z_{i,2} \beta_{i,1} \tanh\!\Big(\frac{z_{i,2} \beta_{i,1}}{\tau}\Big) + 0.2785\,\tau \Big) \end{aligned} \quad (37)$$
where $\beta_{i,1} = \frac{\partial \alpha_{i,1}}{\partial \hat{\eth}_{i,1}} \dot{\hat{\eth}}_{i,1}$, $\eth_{i,2} = \max\{\ell_{i,1}^2, \ell_{j,1}^2, \ell_{i,2}^2, \ell_{j,2}^2\}$, $\vartheta_{i,2} = 1 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{i,1}} x_{i,2}\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{i,1}}\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial d_r} \dot{d}_r\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial \dot{d}_r} \ddot{d}_r\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial \varrho} \dot{\varrho}\big)^2$, $\vartheta_{j,2} = \big(\frac{\partial \alpha_{i,1}}{\partial x_{j,2}} x_{j,3}\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{j,2}}\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{j,1}} x_{j,2}\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{j,1}}\big)^2$, and $\varpi_{i,2} = \big(\frac{\partial \alpha_{i,1}}{\partial x_{i,1}} \psi_{i,1}\big)^2 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{j,1}} \psi_{j,1}\big)^2 + \psi_{i,2}^2 + \big(\frac{\partial \alpha_{i,1}}{\partial x_{j,2}} \psi_{j,2}\big)^2$. These variables are computable and are derived using Lemma 5 and the derivative of $\alpha_{i,1}$.
The virtual controller α i , 2 is designed as follows:
$$\alpha_{i,2} = -\Big(k_{i,2} + \frac{\zeta_{i,2} \hat{\eth}_{i,2} \varpi_{i,2}}{2} + \frac{r_{i,2} \vartheta_{i,2}}{2} + \frac{r_{j,2} \vartheta_{j,2}}{2}\Big) \varrho z_{i,2} - c_{i,2} \frac{l^2}{T_p^2} \varrho^{1+\frac{2}{l}} z_{i,2}^3 - \frac{\varrho z_{i,1}^2 z_{i,2}}{2} - \frac{1}{\varrho} \beta_{i,1} \tanh\!\Big(\frac{z_{i,2} \beta_{i,1}}{\tau}\Big) \quad (38)$$
The corresponding adaptive law is designed as follows:
$$\dot{\hat{\eth}}_{i,2} = -k_{\eth_{i,2}} \varrho \hat{\eth}_{i,2} + \frac{\zeta_{i,2} \varrho^3 \varpi_{i,2} z_{i,2}^2}{2}, \qquad \hat{\eth}_{i,2}(t_0) > 0 \quad (39)$$
where $\hat{\eth}_{i,2}$ is the estimate of $\eth_{i,2}$, $k_{i,2} > 0$, $c_{i,2} \ge \frac{1}{2}$, $k_{\eth_{i,2}} = \bar{k}_{\eth_{i,2}} + \frac{2l}{T_p}\big(\frac{T_p}{\nu}\big)^{\frac{1}{l}}$, $\bar{k}_{\eth_{i,2}} > 0$.
By combining Equations (35)–(39), we obtain
$$\begin{aligned} \dot{V}_{i,2}(t) \le {} & -k_{i,1} \varrho z_{i,1}^2 - \frac{4 c_{i,1} l}{T_p} \frac{\varrho^{\frac{1}{l}} z_{i,1}^2}{2 \underline{g}_{i,1}} - \frac{k_{\eth_{i,1}} \varrho \tilde{\eth}_{i,1}^2}{2 \varrho^2} - \frac{2l}{T_p} \varrho^{\frac{1}{l}} \frac{\tilde{\eth}_{i,1}^2}{2 \varrho^2} - k_{i,2} \varrho z_{i,2}^2 - \frac{4 c_{i,2} l}{T_p} \frac{\varrho^{\frac{1}{l}} z_{i,2}^2}{2 \underline{g}_{i,2}} \\ & - \frac{k_{\eth_{i,2}} \varrho \tilde{\eth}_{i,2}^2}{2 \varrho^2} - \frac{2l}{T_p} \varrho^{\frac{1}{l}} \frac{\tilde{\eth}_{i,2}^2}{2 \varrho^2} + \frac{\varrho z_{i,2}^2 z_{i,3}^2}{2} + \frac{\Theta_{i,2}}{\varrho} \end{aligned} \quad (40)$$
Step q ($q = 3, \ldots, n-1$): Select the Lyapunov function for the qth step as follows:
$$V_{i,q} = V_{i,q-1} + \frac{z_{i,q}^2}{2 \underline{g}_{i,q}} + \frac{\tilde{\eth}_{i,q}^2}{2 \varrho^2} \quad (41)$$
where $\tilde{\eth}_{i,q} = \eth_{i,q} - \hat{\eth}_{i,q}$ represents the parameter estimation error, and $\eth_{i,q} = \max_{1 \le m \le q} \{\ell_{i,m}^2, \ell_{j,m}^2\}$.
The virtual controller α i , q ( q = 3 , , n 1 ) is recursively obtained as follows:
$$\alpha_{i,q} = -\Big(k_{i,q} + \frac{\zeta_{i,q} \hat{\eth}_{i,q} \varpi_{i,q}}{2} + \frac{r_{i,q} \vartheta_{i,q}}{2} + \frac{r_{j,q} \vartheta_{j,q}}{2}\Big) \varrho z_{i,q} - c_{i,q} \frac{l^2}{T_p^2} \varrho^{1+\frac{2}{l}} z_{i,q}^3 - \frac{\varrho z_{i,q-1}^2 z_{i,q}}{2} - \frac{1}{\varrho} \beta_{i,q-1} \tanh\!\Big(\frac{z_{i,q} \beta_{i,q-1}}{\tau}\Big) \quad (42)$$
The corresponding adaptive law is given by
$$\dot{\hat{\eth}}_{i,q} = -k_{\eth_{i,q}} \varrho \hat{\eth}_{i,q} + \frac{\zeta_{i,q} \varrho^3 \varpi_{i,q} z_{i,q}^2}{2}, \qquad \hat{\eth}_{i,q}(t_0) > 0 \quad (43)$$
where $k_{i,q} > 0$, $c_{i,q} \ge \frac{1}{2}$, $k_{\eth_{i,q}} = \bar{k}_{\eth_{i,q}} + \frac{2l}{T_p}\big(\frac{T_p}{\nu}\big)^{\frac{1}{l}}$, $\bar{k}_{\eth_{i,q}} > 0$, $\vartheta_{i,q} = 1 + \sum_{m=1}^{q-1} \big(\frac{\partial \alpha_{i,q-1}}{\partial x_{i,m}} x_{i,m+1}\big)^2 + \sum_{m=1}^{q-1} \big(\frac{\partial \alpha_{i,q-1}}{\partial x_{i,m}}\big)^2 + \sum_{m=0}^{q-1} \big(\frac{\partial \alpha_{i,q-1}}{\partial d_r^{(m)}} d_r^{(m+1)}\big)^2 + \big(\frac{\partial \alpha_{i,q-1}}{\partial \varrho} \frac{l}{T_p} \varrho^{1+\frac{1}{l}}\big)^2$, $\beta_{i,q-1} = \sum_{m=1}^{q} \frac{\partial \alpha_{i,q-1}}{\partial \hat{\eth}_{i,m}} \dot{\hat{\eth}}_{i,m}$, $\vartheta_{j,q} = \sum_{m=1}^{q} \big(\frac{\partial \alpha_{i,q-1}}{\partial x_{j,m}} x_{j,m+1}\big)^2 + \sum_{m=1}^{q} \big(\frac{\partial \alpha_{i,q-1}}{\partial x_{j,m}}\big)^2$, and $\varpi_{i,q} = \sum_{m=1}^{q-1} \big(\frac{\partial \alpha_{i,q-1}}{\partial x_{i,m}} \psi_{i,m}\big)^2 + \sum_{m=1}^{q} \big(\frac{\partial \alpha_{i,q-1}}{\partial x_{j,m}} \psi_{j,m}\big)^2 + \psi_{i,q}^2$.
By incorporating (42)–(43) into the time derivative of (41), we obtain
$$\dot{V}_{i,q}(t) \le -\sum_{m=1}^{q} \big(2 k_{i,m} \underline{g}_{i,m}\big) \varrho \frac{z_{i,m}^2}{2 \underline{g}_{i,m}} - \sum_{m=1}^{q} \frac{4 c_{i,m} l}{T_p} \frac{\varrho^{\frac{1}{l}} z_{i,m}^2}{2 \underline{g}_{i,m}} - \sum_{m=1}^{q} \frac{k_{\eth_{i,m}} \varrho \tilde{\eth}_{i,m}^2}{2 \varrho^2} - \frac{2l}{T_p} \varrho^{\frac{1}{l}} \sum_{m=1}^{q} \frac{\tilde{\eth}_{i,m}^2}{2 \varrho^2} + \frac{\varrho z_{i,q}^2 z_{i,q+1}^2}{2} + \frac{\Theta_{i,q}}{\varrho} \quad (44)$$
where $\Theta_{i,q} = \frac{\bar{g}_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 \underline{g}_{i,1}^2} + \frac{(\sum_{j=1}^{N} a_{i,j} + b_i)^2 + (\sum_{j=1}^{N} a_{i,j})^2}{2 \zeta_{i,1} \underline{g}_{i,1}^2} + \frac{D_{i,1}^2 (\sum_{j=1}^{N} a_{i,j} + b_i)^2}{2 r_{i,1} \underline{g}_{i,1}^2} + \frac{D_{j,1}^2 (\sum_{j=1}^{N} a_{i,j})^2}{2 r_{j,1} \underline{g}_{i,1}^2} + \frac{b_i^2}{2 r_{i,1} \underline{g}_{i,1}^2} + \sum_{m=1}^{q} \frac{c_{i,m}}{\underline{g}_{i,m}^2} + \sum_{m=2}^{q} \frac{2m}{2 \zeta_{i,m} \underline{g}_{i,m}^2} + \sum_{m=2}^{q} \frac{\bar{g}_{i,m}^2}{2 \underline{g}_{i,m}^2} + \sum_{m=1}^{q} \frac{k_{\eth_{i,m}} \eth_{i,m}^2}{2} + \sum_{m=2}^{q} \frac{\sum_{m'=1}^{m} (D_{j,m'}^2 + \bar{g}_{j,m'}^2)}{2 r_{j,m} \underline{g}_{i,m}^2} + \sum_{m=2}^{q} \frac{\bar{g}_{i,m}}{\underline{g}_{i,m}}\, 0.2785\,\tau + \sum_{m=2}^{q} \frac{\sum_{m'=1}^{m} D_{i,m'}^2 + \sum_{m'=1}^{m-1} \bar{g}_{i,m'}^2 + m + 1}{2 r_{i,m} \underline{g}_{i,m}^2}$.
Step n: Select the Lyapunov function for the nth step as follows:
$$V_{i,n} = V_{i,n-1} + \frac{z_{i,n}^2}{2 \underline{g}_{i,n}} + \frac{\tilde{\eth}_{i,n}^2}{2 \varrho^2} \quad (45)$$
where $\tilde{\eth}_{i,n} = \eth_{i,n} - \hat{\eth}_{i,n}$ represents the parameter estimation error.
Differentiating the Lyapunov function above gives
$$\dot{V}_{i,n} = \dot{V}_{i,n-1} + \frac{1}{\underline{g}_{i,n}} z_{i,n} \dot{z}_{i,n} - \frac{1}{\varrho^2} \tilde{\eth}_{i,n} \dot{\hat{\eth}}_{i,n} - \frac{\dot{\varrho}}{\varrho^3} \tilde{\eth}_{i,n}^2 = \dot{V}_{i,n-1} + \frac{1}{\underline{g}_{i,n}} z_{i,n} \big(g_{i,n} u_i + f_{i,n} + d_{i,n} - \dot{\alpha}_{i,n-1}\big) - \frac{1}{\varrho^2} \tilde{\eth}_{i,n} \dot{\hat{\eth}}_{i,n} - \frac{\dot{\varrho}}{\varrho^3} \tilde{\eth}_{i,n}^2 \quad (46)$$
The adaptive law is formulated as follows:
$$\dot{\hat{\eth}}_{i,n} = -k_{\eth_{i,n}} \varrho \hat{\eth}_{i,n} + \frac{\zeta_{i,n} \varrho^3 \varpi_{i,n} z_{i,n}^2}{2}, \qquad \hat{\eth}_{i,n}(t_0) > 0 \quad (47)$$
where $\eth_{i,n} = \max_{1 \le m \le n} \{\ell_{i,m}^2, \ell_{j,m}^2\}$, $k_{\eth_{i,n}} = \bar{k}_{\eth_{i,n}} + \frac{2l}{T_p}\big(\frac{T_p}{\nu}\big)^{\frac{1}{l}}$, $\bar{k}_{\eth_{i,n}} > 0$.
According to the previously defined event-triggered control mechanism, we have
Δ i ( t ) = ( 1 + K 1 ( t ) h ) u i ( t ) + K 2 ( t ) h 1
where | K 1 ( t ) | < 1 and | K 2 ( t ) | < 1 are time-varying parameters.
By substituting (48) into (46), we obtain
V ˙ i , n ( t ) m = 1 n 1 ( 2 k i , m g ̲ i , m ) ϱ z i , m 2 2 g ̲ i , m m = 1 n 1 4 c i , m l T p ϱ 1 l z i , m 2 2 g ̲ i , m m = 1 n 1 k ð i , m ϱ ð ˜ i , m 2 2 ϱ 2 2 l T p ϱ 1 l m = 1 n 1 ð ˜ i , m 2 2 ϱ 2 + ϱ z i , n 1 2 z i , n 2 2 + Θ i , n 1 ϱ + g i , n g ̲ i , n z i , n Δ i ( t ) 1 + h K 1 ( t ) g i , n g ̲ i , n z i , n h K 2 ( t ) 1 + h K 1 ( t ) + 1 g ̲ i , n z i , n f i , n + 1 g ̲ i , n z i , n d i , n 1 g ̲ i , n z i , n α ˙ i , n 1 1 ϱ 2 ð ˜ i , n ð ^ ˙ i , n ϱ ˙ ϱ 3 ð ˜ i , n 2
According to the definition in (15), we have
g i , n g ̲ i , n z i , n h 1 K 2 ( t ) 1 + h K 1 ( t ) 1 g ̲ i , n g i , n z i , n h 1 1 h z i , n 2 ϱ 2 ( 1 h ) 2 + h 1 ( t ) 2 2 ϱ
g i , n g ̲ i , n z i , n Δ i ( t ) 1 + h K 1 ( t ) g i , n g ̲ i , n z i , n Δ i ( t ) 1 + h ( k i , n + ζ i , n ð ^ i , n ϖ i , n 2 + r i , n ϑ i , n 2 + r j , n ϑ j , n 2 ) ϱ z i , n 2 c i , n l 2 T p 2 ϱ 1 + 2 l z i , n 4 ϱ z i , n 1 2 z i , n 2 2 1 ϱ z i , n β i , n 1 tanh ( z i , n β i , n 1 τ ) z i , n 2 ϱ 2 ( 1 h ) 2
Substituting (50)–(51) into (49) yields
V ˙ i , n ( t ) m = 1 n ( 2 k i , m g ̲ i , m ) ϱ z i , m 2 2 g ̲ i , m m = 1 n 4 c i , m l T p ϱ 1 l z i , m 2 2 g ̲ i , m m = 1 n k ð i , m ϱ ð ˜ i , m 2 2 ϱ 2 2 l T p ϱ 1 l m = 1 n ð ˜ i , m 2 2 ϱ 2 + Θ i , n ϱ
where Θ i , n = g ¯ i , 1 2 ( j = 1 N a i , j + b i ) 2 2 g ̲ i , 1 2 + ( j = 1 N a i , j + b i ) 2 + ( j = 1 N a i , j ) 2 2 ζ i , 1 g ̲ i , 1 2 + D i , 1 2 ( j = 1 N a i , j + b i ) 2 2 r i , 1 2 g ̲ i , 1 2 + D j , 1 2 ( j = 1 N a i , j ) 2 2 r j , 1 2 g ̲ i , 1 2 + b i 2 2 r i , 1 2 g ̲ i , 1 2 + m = 1 n c i , m g ̲ i , m 2 + m = 2 n 2 m 2 ζ i , m g ̲ i , m 2 + m = 2 n g ¯ i , m 2 2 g ̲ i , m 2 + m = 2 n m = 1 n ( D j , m + g ¯ j , m ) 2 2 r j , m g ̲ i , m 2 + m = 1 n k ð i , m ð i , m 2 2 + m = 2 n g ¯ i , m g ̲ i , m 0.2758 τ + m = 2 n m = 1 n D i , m 2 + m = 1 n 1 g ¯ i , m 2 + m + 1 2 r i , m g ̲ i , m 2
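The input relation (48) is the standard way of rewriting a sample-and-hold, event-triggered input for analysis. The sketch below is a minimal numerical illustration of this idea, assuming a relative-plus-absolute trigger threshold of the form h | u ( t k ) | + h 1 (an assumption consistent with (48), using h = 0.5 and h 1 = 1 from Table 1); the stand-in signal and function names are illustrative, not the paper's controller.

```python
import numpy as np

def hold_with_trigger(delta, h=0.5, h1=1.0):
    """Sample-and-hold the continuous control law `delta` under the assumed
    trigger |delta(t) - u(t_k)| >= h*|u(t_k)| + h1, which keeps the held
    input representable as Delta_i = (1 + K1 h) u_i + K2 h1 with
    |K1(t)| < 1, |K2(t)| < 1 between events.
    Returns the held input and the triggering sample indices."""
    u = np.empty_like(delta)
    u_k, events = delta[0], [0]
    for n, d in enumerate(delta):
        if abs(d - u_k) >= h * abs(u_k) + h1:  # event condition met
            u_k = d                            # transmit / update actuator
            events.append(n)
        u[n] = u_k
    return u, events

ts = np.linspace(0.0, 10.0, 10001)
delta = 5.0 * np.sin(2.0 * ts)                 # stand-in continuous control law
u, events = hold_with_trigger(delta)

# Between events the error never exceeds the threshold, so the held input
# always admits the decomposition in (48).
assert np.all(np.abs(delta - u) <= 0.5 * np.abs(u) + 1.0 + 1e-9)
```

The number of entries in `events` is far smaller than the number of samples, which is the communication saving later reported in Figure 7.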
Part 2 : t ∈ [ T , ∞ )
The design steps are similar to those in the first part. Therefore, only the expressions for the virtual/actual controllers and the adaptive laws are provided here.
In the first step, the virtual controller α i , 1 maintains consistency with (28). It is important to note that in the standard backstepping approach, repeated differentiation of the virtual controller is involved. However, in the second part, the function ϱ is defined as a constant according to (5), meaning its derivative is zero. This characteristic influences the design of the virtual control strategy. To compensate for the discontinuity in the control input at t = T , the computable control action smoothing unit is defined as follows
δ i , q ( T ) = ( ζ i , q ( ð ^ i , q ϖ ¯ i , q ð ^ i , q ϖ i , q ) 2 ϱ z i , q + r i , q ( ϑ ¯ i , q ϑ i , q ) 2 ϱ z i , q + r j , q ( ϑ ¯ j , q ϑ j , q ) 2 ϱ z i , q ) | t = T
The designed virtual controller is given by
α i , q = ( k i , m + ζ i , q ð ^ i , q ϖ i , q 2 + r i , q ϑ i , q 2 + r j , q ϑ j , q 2 ) ϱ z i , q c i , q l 2 T p 2 ϱ 1 + 2 l z i , q 3 ϱ z i , q 1 2 z i , q 2 1 ϱ β i , q 1 tanh ( z i , q β i , q 1 τ ) + δ i , q ( T )
where q = 2 , … , n − 1 .
The designed actual controller is given by
Δ i ( t ) = ( 1 + h ) [ ( k i , n + ζ i , n ð ^ i , n ϖ i , n 2 + r i , n ϑ i , n 2 + r j , n ϑ j , n 2 ) ϱ z i , n + c i , n l 2 T p 2 ϱ 1 + 2 l z i , n 3 + ϱ z i , n 1 2 z i , n 2 1 ϱ β i , n 1 tanh ( z i , n β i , n 1 τ ) δ i , n ( T ) ] ( 1 + h ) z i , n ϱ 2 ( 1 h ) 2
where ϖ ¯ i , n = m = 1 n 1 ( α i , q 1 x i , m i , m ) 2 + m = 1 n ( α i , n 1 x j , m j , m ) 2 + i , n 2 , ϑ ¯ i , n = 1 + m = 1 n 1 ( α i , n 1 x i , m x i , m + 1 ) 2 + m = 1 n 1 ( α i , n 1 x i , m ) 2 + m = 0 n 1 ( α i , n 1 d r ( m ) d r ( m + 1 ) ) 2 , ϑ ¯ j , n = m = 1 n ( α i , n 1 x j , m x j , m + 1 ) 2 + m = 1 n ( α i , n 1 x j , m ) 2 .
The corresponding adaptive law is given by
ð ^ ˙ i , q = − k ð i , q ϱ ð ^ i , q + ζ i , q ϱ 3 ϖ i , q z i , q 2 2 , ð ^ i , q ( t 0 ) > 0
where q = 2 , … , n , ð i , q = max 1 ≤ m ≤ q { i , m 2 , j , m 2 } , k ð i , q = k ¯ ð i , q + 2 l T p ( T p ν ) 1 l , k ¯ ð i , q > 0 .
Finally, the entire control algorithm design process is presented in Algorithm 1.
Algorithm 1: Control algorithm of agent i
  • t ∈ [ t 0 , T ) :
  • Step 1.1: Design the virtual controller α i , 1 (28) and the adaptive law ð ^ ˙ i , 1 (29).
  • Step 1.2: Obtain the virtual controller α i , 2 (38) by recursion and design the adaptive law ð ^ ˙ i , 2 (29).
  • Step 1.q ( q = 3 , … , n − 1 ): Obtain the virtual controller α i , q (38) through step-by-step recursion and design the adaptive law ð ^ ˙ i , q (29).
  • Step 1.n: Introduce the event-triggered mechanism, obtain the actual controller u i , and design the adaptive law ð ^ ˙ i , n (47).
  • t ∈ [ T , ∞ ) :
  • Step 2.1: The same as Step 1.1.
  • Step 2.q ( q = 2 , … , n ): Introduce the softening unit δ i , q (53) and update the virtual controller α i , q (54), the actual controller u i (55), and the adaptive law ð ^ ˙ i , q (56).

3.2. Theoretical Analysis

After designing the controller, theoretical analysis is conducted to demonstrate its stability and convergence properties.
Theorem 1. 
Consider the heterogeneous multi-agent system described in (1). If Assumptions 1–6 hold, then under the virtual controllers α i , q , the actual controllers u i , and corresponding adaptive laws ð ^ i , q , the following objectives can be achieved
(1) Within the prescribed time, the tracking error of the heterogeneous multi-agent system converges to a neighborhood of the origin, achieving practical prescribed-time trajectory tracking consensus. More importantly, for all t ∈ [ t 0 , T ) , the following holds
| z i , m | ≤ 1 ϱ B i , m
where B i , m ≤ κ 1 ( ∑ i = 1 N ∑ m = 1 n ( z i , m 2 ( t 0 ) + ð ˜ i , m 2 ( t 0 ) g ̲ i , m ) ) + Θ k ;
(2) After the prescribed time, the tracking error of the system remains within a neighborhood of the origin. As time approaches infinity, the tracking error converges to zero. Specifically, for t ∈ [ T , + ∞ ) , the following holds
| z i , j | ≤ 1 ϱ B i , j
where B i , j ≤ κ 1 ( T ) κ 2 ( t ) ( ∑ i = 1 N ∑ j = 1 n ( z i , j 2 ( t 0 ) + ð ˜ i , j 2 ( t 0 ) g ̲ i , j ) ) + 2 g ̲ i , m κ 2 ( t ) Θ k + 2 g ̲ i , m Θ ¯ k ¯ ;
(3) All internal signals are continuous and bounded;
(4) There exists t * > 0 such that, for all k ∈ Z + , the execution intervals { t k + 1 − t k } are lower bounded by t * . Thus, Zeno behavior can be prevented.
Proof of Theorem 1. 
Part 1 : t ∈ [ t 0 , T )
Case 1 : For t ∈ [ t k , t k + 1 ) , i.e., between two consecutive event-triggered instants t k and t k + 1 , we analyze the stability of the system and show that the output y i ( t ) can follow the target trajectory within the prescribed time.
For the ith agent, select the total Lyapunov function as follows
V i , n = ∑ m = 1 n z i , m 2 2 g ̲ i , m + ∑ m = 1 n ð ˜ i , m 2 2 ϱ 2
Differentiating Equation (59) gives
V ˙ i , n m = 1 n ( 2 k i , m g ̲ i , m ) ϱ z i , m 2 2 g ̲ i , m m = 1 n 4 c i , m l T p ϱ 1 l z i , m 2 2 g ̲ i , m m = 1 n k ð i , m ϱ ð ˜ i , m 2 2 ϱ 2 2 l T p ϱ 1 l m = 1 n ð ˜ i , m 2 2 ϱ 2 + Θ i , n ϱ k ϱ V i , n 2 l T p ϱ 1 l V i , n + Θ i , n ϱ
where k = min 1 m n { 2 k i , m g ̲ i , m , k ð i , m } , k i , m > 0 , c i , m 1 2 , k ð i , m = k ¯ ð i , m + 2 l T p ( T p ν ) 1 l , k ¯ ð i , m > 0 , Θ i , n = g ¯ i , 1 2 ( j = 1 N a i , j + b i ) 2 2 g ̲ i , 1 2 + ( j = 1 N a i , j + b i ) 2 + ( j = 1 N a i , j ) 2 2 ζ i , 1 g ̲ i , 1 2 + D i , 1 2 ( j = 1 N a i , j + b i ) 2 2 r i , 1 2 g ̲ i , 1 2 + b i 2 2 r i , 1 2 g ̲ i , 1 2 + D j , 1 2 ( j = 1 N a i , j ) 2 2 r j , 1 2 g ̲ i , 1 2 + m = 2 n m = 1 n D i , m 2 + m = 1 n 1 g ¯ i , m 2 + m + 1 2 r i , m g ̲ i , m 2 + m = 1 n c i , m g ̲ i , m 2 + m = 2 n 2 m 2 ζ i , m g ̲ i , m 2 + m = 2 n g ¯ i , m 2 2 g ̲ i , m 2 + m = 1 n k ð i , m ð i , m 2 2 + h 1 2 2 + m = 2 n g ¯ i , m g ̲ i , m 0.2758 τ + m = 2 n m = 1 n ( D j , m + g ¯ j , m ) 2 2 r j , m g ̲ i , m 2
According to Lemma 3, we have
V i , n ( t ) ≤ κ 1 ϱ ( t ) 2 V i , n ( t 0 ) + Θ i , n k ϱ ( t ) 2 = κ 1 ϱ ( t ) 2 ( ∑ m = 1 n z i , m 2 ( t 0 ) 2 g ̲ i , m + ∑ m = 1 n ð ˜ i , m 2 ( t 0 ) 2 ) + Θ i , n k ϱ ( t ) 2
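The step from the differential inequality (60) to the explicit bound above is a comparison-lemma argument: along v ˙ = − k ϱ v + Θ ϱ the trajectory decays toward the ultimate bound Θ / k . This can be sanity-checked numerically on the scalar comparison dynamics; the values k = 1, Θ = 0.5, v 0 = 2 and the scaling function below are illustrative stand-ins, and the extra damping exponent term of (60) (which yields the 1 / ϱ 2 sharpening) is dropped.

```python
def rho(t, Tp=1.8, nu=0.8, l=3):
    """Representative prescribed-time scaling function (assumed form):
    increasing on [0, T) with T = Tp - nu, frozen afterwards."""
    T = Tp - nu
    return (Tp / (Tp - t)) ** l if t < T else (Tp / nu) ** l

k, theta, dt = 1.0, 0.5, 1e-3
v, t, peak = 2.0, 0.0, 2.0
while t < 3.0:
    v += dt * rho(t) * (-k * v + theta)  # Euler step of v' = rho*(-k*v + theta)
    t += dt
    peak = max(peak, v)

# v decreases monotonically toward theta/k = 0.5, so it never exceeds
# max(v0, theta/k) -- the boundedness that the comparison lemma certifies.
assert peak <= 2.0 + 1e-9
assert abs(v - theta / k) < 1e-3
```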
For the entire closed-loop heterogeneous multi-agent system, select the total Lyapunov function as follows:
V = ∑ i = 1 N V i , n
Differentiating (62) results in
V ˙ = ∑ i = 1 N V ˙ i , n ≤ − k ϱ V − 2 l T p ϱ 1 l V + Θ ϱ
By applying Lemma 3, we obtain
V ( t ) ≤ κ 1 ϱ ( t ) 2 V ( t 0 ) + Θ k ϱ ( t ) 2 = κ 1 ϱ ( t ) 2 ( ∑ i = 1 N ∑ m = 1 n z i , m 2 ( t 0 ) 2 g ̲ i , m + ∑ i = 1 N ∑ m = 1 n ð ˜ i , m 2 ( t 0 ) 2 ) + Θ k ϱ ( t ) 2
where Θ = i = 1 N Θ i , n .
According to (59), it can be obtained that
z i , m 2 ≤ 2 g ̲ i , m V ( t ) ( i = 1 , … , N ; m = 1 , … , n )
and
ð ˜ i , m 2 ≤ 2 ϱ 2 V ( t ) ( i = 1 , … , N ; m = 1 , … , n )
By solving Equations (64)–(66) simultaneously, we obtain (57) and can infer that B i , m ≤ Θ k + κ 1 ( ∑ i = 1 N ∑ m = 1 n ( z i , m 2 ( t 0 ) + ð ˜ i , m 2 ( t 0 ) g ̲ i , m ) ) , which indicates that z i , m and ð ˜ i , m are bounded.
Based on Assumptions 1 and 2, it can be inferred that ϖ i , 1 = i , 1 2 + j , 1 2 ∈ L ∞ and ϑ i , 1 = 1 + | d ˙ r | 2 ∈ L ∞ . Since ϱ remains bounded over the entire time interval [ t 0 , + ∞ ) , the term ϱ z i , m in Equation (57) is also bounded over [ t 0 , + ∞ ) . Furthermore, given that d r is bounded, x i , 1 remains bounded as well. According to Lemma 4, it can be ensured that ð ^ i , m ∈ L ∞ , indicating that ð ˜ i , m is bounded.
Since 0 < ϱ 2 / l − 2 ≤ 1 ( l ≥ max { n , 2 } ) , together with Equation (57) and the condition 0 < κ 1 ( t ) ≤ 1 , it follows that
α i , 1 1 ( j = 1 N a i , j + b i ) ( ( k i , 1 + ζ i , 1 ð ^ i , 1 ϖ i , 1 2 + r i , 1 ϑ i , 1 2 + r j , 1 2 ) ϱ z i , 1 + j = 1 N a i , j x j , 2 + c i , 1 l 2 T p 2 ϱ 2 l 2 ( ϱ z i , 1 ) 3 ) 1 ( j = 1 N a i , j + b i ) ( ( k i , 1 + ζ i , 1 ð ^ i , 1 ϖ i , 1 2 + r i , 1 ϑ i , 1 2 + r j , 1 2 ) B i , 1 + j = 1 N a i , j x j , 2 + c i , 1 l 2 T p 2 B i , 1 3 ) L
It is worth noting that in the first part of Section 3.1, the terms ϖ i , q , ϑ i , q , and ϑ j , q ( q = 2 , , n ) , which are related to ϱ , may be relatively large but are bounded and computable. In the controller design, adjustable parameters ζ i , q , r i , q , and r j , q are introduced to ensure computational feasibility. According to Lemma 4, it can be concluded that ð ^ i , m is bounded. Furthermore, based on Lemma 2, the term 1 ϱ β i , q 1 tanh z i , q β i , q 1 τ is also bounded. Additionally, since 0 ϱ 2 1 , it follows that
α i , q ( k i , q + ζ i , q ð ^ i , q ϖ i , q 2 + r i , q ϑ i , q 2 + r j , q ϑ j , q 2 ) ϱ z i , q + c i , q l 2 T p 2 ϱ 2 l 2 ( ϱ z i , q ) 3 + z i , q 1 2 2 ϱ z i , q 1 ϱ β i , q 1 tanh ( z i , q β i , q 1 τ ) ( k i , q + ζ i , q ð ^ i , q ϖ i , q 2 + r i , q ϑ i , q 2 + r j , q ϑ j , q 2 + 1 2 B i , m 1 ) B i , m + c i , q l 2 T p 2 ϱ 2 l 2 B i , m 3
Δ i ( 1 + h ) [ ( k i , n + ζ i , n ð ^ i , n ϖ i , n 2 + r i , n ϑ i , n 2 + r j , n ϑ j , n 2 + 1 2 B i , n 1 ) B i , n + c i , q l 2 T p 2 ϱ 2 l 2 B i , n 3 + ( 1 + h ) B i , n 2 ( 1 h ) 2 L
From Equations (68) and (69), it can be observed that α i , q and Δ i are both bounded.
Case 2 : Proving system stability at t = t k + 1 .
According to the event-triggered control strategy and event-triggered mechanism, we have u i = Δ i ( t k + 1 ) .
V ˙ i , n = V ˙ i , n 1 + 1 g ̲ i , n g i , n z i , n Δ i ( t k + 1 ) + 1 g ̲ i , n z i , n f i , n + 1 g ̲ i , n z i , n d i , n 1 g ̲ i , n z i , n α ˙ i , n 1 1 ϱ 2 ð ˜ i , n ð ^ ˙ i , n ϱ ˙ ϱ 3 ð ˜ i , n 2 m = 1 n ( 2 k i , m g ̲ i , m ) ϱ z i , m 2 2 g ̲ i , m m = 1 n 4 c i , m l T p ϱ 1 l z i , m 2 2 g ̲ i , m m = 1 n k ð i , m ϱ ð ˜ i , m 2 2 ϱ 2 2 l T p ϱ 1 l m = 1 n ð ˜ i , m 2 2 ϱ 2 + Θ i , n ϱ
where Θ i , n = g i , 1 2 ( j = 1 N a i , j + b i ) 2 2 g i , 1 2 + ( j = 1 N a i , j + b i , j ) 2 + ( j = 1 N a i , j ) 2 2 ζ i , 1 g i , 1 2 + D i , 1 2 ( j = 1 N a i , j + b i , j ) 2 2 τ i , 1 2 g i , 1 2 + b i 2 2 τ i , 1 2 g i , 1 2 + D j , 1 2 ( j = 1 N a i , j ) 2 2 τ j , 1 2 g i , 1 2 + m = 1 n ( c i , m g i , m 2 + k i , m B i , m 2 2 ) + m = 2 n ( 2 m 2 ζ i , m g i , m 2 + g ¯ i , m 2 2 g i , m 2 + m = 1 n D i , m 2 + m = 1 n 1 g ¯ i , m 2 + m + 1 2 r i , m g i , m 2 + m = 1 n D j , m + g ¯ j , m 2 2 r j , m g i , m 2 + g ¯ i , g ̲ i , 0.2758 τ ) . Through further analysis, similar to Case 1, it can be concluded that the result remains valid at time t k + 1 as well. The analysis beyond the interval [ t k , t k + 1 ] will follow the same process as in Case 1 and Case 2.
Part 2 : t ∈ [ T , ∞ )
We now prove that the system maintains target-trajectory tracking consensus beyond the prescribed time.
First, we prove that the control strategy is continuous at t = T . According to δ i , q ( T ) defined in (53), we obtain
α i , q ( T ) = k i , q ϱ ( T ) z i , q ( T ) c i , q l 2 T p 2 ϱ ( T ) 1 + 2 l z i , q ( T ) 3 r i , q ϑ i , q ( T ) 2 ϱ ( T ) z i , q ( T ) ζ i , q ð ^ i , q ( T ) ϖ i , q ( T ) 2 ϱ ( T ) z i , q ( T ) r j , q ϑ j , q ( T ) 2 ϱ ( T ) z i , q ( T ) ϱ ( T ) z i , q 1 ( T ) 2 z i , q ( T ) 2 1 ϱ ( T ) β i , q 1 ( T ) tanh ( z i , q ( T ) β i , q 1 ( T ) τ ) = lim t T α i , q ( t )
Therefore, it can be concluded that α i , q ( q = 2 , , n 1 ) is continuous at t = T . Similarly, following the analysis in (71), it can also be shown that Δ i is continuous at t = T . Moreover, it can be inferred that δ i , q ( T ) is a finite constant. Next, we analyze the convergence of the system over the interval [ T , + ) .
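This continuity mechanism can be checked numerically: with a representative two-phase scaling function (an assumed form for illustration; the paper's exact definition is its Equation (5), with T p = 1.8 s, ν = 0.8 and l = 3 taken from Table 1), ϱ grows on [ t 0 , T ) and is frozen at the finite constant ( T p / ν ) l afterwards, so ϱ ˙ = 0 for t ≥ T and no jump occurs at the switch.

```python
def rho(t, t0=0.0, Tp=1.8, nu=0.8, l=3):
    """Representative time-varying scaling function: increasing on
    [t0, T) with T = Tp - nu, constant (so rho_dot = 0) for t >= T."""
    T = t0 + Tp - nu
    if t < T:
        return (Tp / (Tp - (t - t0))) ** l
    return (Tp / nu) ** l  # frozen at a finite constant after T

T = 1.0
# rho is continuous at t = T and constant afterwards, which is why the
# one-shot smoothing unit delta_{i,q}(T) in (53) is enough to keep the
# control action continuous across the phase switch.
assert abs(rho(T - 1e-9) - rho(T)) < 1e-6
assert rho(3.0) == rho(T)
assert rho(0.0) == 1.0
```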
Compared to the first part of Section 3.1, it is only necessary to handle the term g i , m g ̲ i , m z i , m δ i , m ( T ) . By applying Young's inequality, we obtain g i , m g ̲ i , m z i , m δ i , m ( T ) ≤ λ i , m ϱ z i , m 2 2 + g ¯ i , m 2 δ i , m 2 2 λ i , m ϱ g ̲ i , m 2 , where λ i , m ( m = 2 , … , n ) are arbitrarily given positive constants. Thus, we can derive that
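The Young's-inequality step used here is the weighted form a b ≤ λ a 2 / 2 + b 2 / ( 2 λ ) (with weight λ = λ i , m ϱ and b absorbing the gain ratio); it follows from ( λ a − b / λ ) 2 ≥ 0 . A quick numerical check over a grid of illustrative values:

```python
import itertools

# a*b <= lam*a^2/2 + b^2/(2*lam) holds for all real a, b and lam > 0:
# it is (sqrt(lam)*a - b/sqrt(lam))^2 >= 0 rearranged.
for a, b, lam in itertools.product((-3.0, -0.7, 0.0, 1.5, 4.0),
                                   (-2.0, 0.3, 5.0),
                                   (0.1, 1.0, 7.0)):
    assert a * b <= lam * a * a / 2 + b * b / (2 * lam) + 1e-12
```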
V ˙ ≤ − k ¯ ϱ V − 2 l T p ϱ 1 l V + Θ ¯ ϱ
where k ¯ = min 1 m n { ( 2 k i , m λ i , m ) g ̲ i , m , k ð i , m } , k i , m > λ i , m > 0 , c i , m 1 2 , Θ ¯ i , n = b i 2 2 r i , 1 2 g i , 1 2 + g ¯ i , 1 2 ( j = 1 N a i , j + b i ) 2 2 g i , 1 2 + ( j = 1 N a i , j + b i ) 2 + ( j = 1 N a i , j ) 2 2 ζ i , 1 g i , 1 2 + D i , 1 2 ( j = 1 N a i , j + b i ) 2 2 r i , 1 2 g i , 1 2 + D j , 1 2 ( j = 1 N a i , j ) 2 2 r j , 1 2 g i , 1 2 + m = 1 n ( c i , m g i , m 2 + k δ i , m δ i , m 2 2 ) + m = 2 n ( 2 m 2 ζ i , m g i , m 2 + g ¯ i , m 2 2 g i , m 2 + g ¯ i , m 2 δ i , m 2 2 λ i , m g ̲ i , m 2 + m = 1 n ( D j , m + g ¯ j , m ) 2 2 r j , m g i , m 2 + m = 1 n D i , m 2 + m = 1 n 1 g ¯ i , m 2 + m + 1 2 r i , m g i , m 2 + g ¯ i , m g i , m 0.2758 τ ) , Θ ¯ = i = 1 N Θ ¯ i , n
According to Lemma 3, we can infer that
V ( t ) ≤ κ 2 ( t ) κ 1 ( T ) ϱ 2 V ( t 0 ) + κ 2 ( t ) Θ k ϱ 2 + Θ ¯ k ¯ ϱ 2
From this, it can be inferred that (58) holds and we can obtain that B i , j ≤ 2 g ̲ i , m κ 2 ( t ) Θ k + 2 g ̲ i , m Θ ¯ k ¯ + κ 1 ( T ) κ 2 ( t ) ( ∑ i = 1 N ∑ j = 1 n ( z i , j 2 ( t 0 ) + ð ˜ i , j 2 ( t 0 ) g ̲ i , j ) ) . Thus, for any initial conditions, it follows that z i , q ∈ L ∞ and ð ^ i , q ∈ L ∞ . In particular, as t → + ∞ and ν → 0 , we have z i , q → 0 .
For the event-triggered control strategy, define the measurement error E i ( t ) = Δ i ( t ) − u i ( t ) . We show that there exists t * > 0 such that, for all k ∈ Z + , the inter-event time satisfies t k + 1 − t k ≥ t * .
Thus, we obtain
d | E i | d t = d d t ( E i × E i ) 1 2 = sign ( E i ) E ˙ i ≤ | Δ ˙ i ( t ) |
where Δ ˙ i is continuous, since it involves the continuous functions f i , o ( x ¯ i , o , t ) ( o = 1 , … , n ) and the n th-order continuous reference trajectory d r . Considering that Δ ˙ i includes the bounded signals z i , q and ð ^ i , q ( q = 1 , … , n ) , we have | Δ ˙ i | ≤ δ , where δ > 0 is a constant. Thus, we can conclude that t * ≥ max { h , h 1 } / δ , which confirms that Zeno behavior is successfully avoided. □
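The intuition of the Zeno-exclusion argument is that | E i | restarts from zero at each event and grows at rate at most δ , while the trigger threshold is at least h 1 -sized, so each inter-event interval is at least a fixed positive time. A numerical illustration (the stand-in signal, its rate bound, and the threshold form are assumptions, with h = 0.5 and h 1 = 1 from Table 1):

```python
import numpy as np

h, h1, dt = 0.5, 1.0, 1e-3
ts = np.arange(0.0, 10.0, dt)
delta = 5.0 * np.sin(2.0 * ts)   # stand-in control law, |d delta/dt| <= 10
delta_max = 10.0                 # Lipschitz-type rate bound (plays the role of delta)

u_k, last, gaps = delta[0], 0, []
for n, d in enumerate(delta):
    if abs(d - u_k) >= h * abs(u_k) + h1:  # event: error reached threshold
        gaps.append(n - last)
        u_k, last = d, n

# Each inter-event interval is at least h1/delta_max = 0.1 s, i.e. at
# least 100 samples at dt = 1 ms: no accumulation of triggering instants.
assert min(gaps) >= 100
```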
Remark 3. 
Unlike [23], this paper explores the practical prescribed-time tracking control problem for high-order heterogeneous MASs affected by external disturbances and uncertainties, utilizing an event-triggered mechanism. The designed control strategy not only conserves communication resources but also better aligns with practical applications.
Remark 4. 
Unlike [11], which defines control only within a specific time interval, and [12], which sets the control to zero beyond the prescribed time, this paper proposes an event-triggered practical prescribed-time trajectory tracking consensus control scheme for heterogeneous MASs that remains effective over the entire time horizon.

4. Illustrative Results

To demonstrate the effectiveness of the proposed approach, a numerical simulation case is provided in this section, and all simulations are implemented in MATLAB R2025b.
Consider a heterogeneous multi-agent system composed of five agents, with the dynamics of an individual follower described as follows:
x ˙ i , 1 ( t ) = g i , 1 ( x ¯ i , 1 , t ) x i , 2 ( t ) + f i , 1 ( x ¯ i , 1 , t ) + d i , 1 ( t ) x ˙ i , 2 ( t ) = g i , 2 ( x ¯ i , 2 , t ) u i ( t ) + f i , 2 ( x ¯ i , 2 , t ) + d i , 2 ( t ) y i = x i , 1
where i = 1 , 2 , 3 , 4 , 5 , f i , 1 ( x ¯ i , 1 ) = 0.1 x i , 1 2 + 0.1 cos ( 0.5 x i , 1 ) , f i , 2 ( x ¯ i , 2 ) = 0.1 x i , 1 x i , 2 + 0.05 sin ( x i , 1 x i , 2 ) + x i , 1 e | x i , 2 | , d i , 1 = 0.1 sin t , d i , 2 = 0.1 sin t , g i , 1 = 2 + 0.5 i sin ( x i , 1 ) , g i , 2 = 3 + 0.2 i cos ( x i , 1 x i , 2 ) , i , 1 = 1 + x i , 1 2 , i , 2 = 1 + | x i , 1 x i , 2 | + | x i , 1 | . Choose the target trajectory as y r = 0.2 sin ( 2 t ) . The system parameters and initial conditions are shown in Table 1. The system’s communication topology is illustrated in Figure 1.
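The follower model above can be encoded directly. The sketch below integrates the five agents' dynamics with the initial conditions of Table 1 under a simple stabilizing feedback, just to exercise the model: the feedback gains are placeholders (not the paper's prescribed-time controller), and the sign of the exponent in the x i , 1 e − | x i , 2 | term of f i , 2 is an assumption.

```python
import numpy as np

def f_dyn(i, x1, x2, u, t):
    """Right-hand side of follower i's dynamics from Section 4
    (exponent sign in f_{i,2} assumed negative)."""
    f1 = 0.1 * x1**2 + 0.1 * np.cos(0.5 * x1)
    f2 = 0.1 * x1 * x2 + 0.05 * np.sin(x1 * x2) + x1 * np.exp(-abs(x2))
    g1 = 2.0 + 0.5 * i * np.sin(x1)
    g2 = 3.0 + 0.2 * i * np.cos(x1 * x2)
    d1 = d2 = 0.1 * np.sin(t)                  # external disturbances
    return g1 * x2 + f1 + d1, g2 * u + f2 + d2

dt, t_end = 1e-3, 3.0
x1 = np.array([0.1, 0.2, 0.0, 0.1, 0.2])       # x_{i,1}(0) from Table 1
x2 = np.array([0.5, 0.0, 0.5, 0.2, 0.5])       # x_{i,2}(0) from Table 1
t = 0.0
while t < t_end:
    yr = 0.2 * np.sin(2.0 * t)                 # target trajectory
    for i in range(5):
        u = -5.0 * (x1[i] - yr) - 3.0 * x2[i]  # placeholder PD-like feedback
        dx1, dx2 = f_dyn(i + 1, x1[i], x2[i], u, t)
        x1[i] += dt * dx1                      # forward-Euler integration
        x2[i] += dt * dx2
    t += dt

assert np.all(np.isfinite(x1)) and np.all(np.isfinite(x2))
assert np.max(np.abs(x1)) < 5.0                # states remain bounded
```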
Two different prescribed times are chosen: T p = 1.8 s with T = T p − ν = 1 s, and T p = 2.8 s with T = T p − ν = 2 s. The corresponding simulation results are shown in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7.
Figure 2a,b illustrate that under the proposed event-triggered mechanism, the five agents are able to track the desired reference trajectory within different preset times and maintain consensus after the preset time is reached. Figure 2c presents a comparison with the method in [14]. In [14], the effect of external disturbances was not considered. As shown in Figure 2c, when external disturbances are present, the agents fail to accurately track the desired trajectory within the preset time, and a persistent tracking error exists between the agents and the reference trajectory. This comparison clearly demonstrates that the control strategy proposed in this paper can effectively handle the presence of external disturbances. Figure 3 depicts the tracking errors between the agents and the desired trajectory. From Figure 3a,b, it can be observed that the agents achieve trajectory-tracking consensus under different preset time conditions, and the tracking errors remain small. However, as shown in Figure 3b, when disturbances are present, the tracking error at the preset time is relatively large, indicating that the agents cannot accurately track the desired trajectory within the preset time compared with the disturbance-free case. Figure 4 illustrates the evolution of the adaptive laws of the virtual controllers, while Figure 5 shows those of the actual controllers. It can be observed from Figure 4 and Figure 5 that the parameter estimates ð ^ i , 1 and ð ^ i , 2 remain bounded throughout the operation. Figure 6 presents the control inputs of the system. Figure 6a,b correspond to the control inputs obtained with the proposed event-triggered mechanism, whereas Figure 6c shows the control inputs of the method in [14] without event-triggering. 
As shown in Figure 6a,b, after introducing the event-triggered mechanism, the number of control updates is significantly reduced compared with Figure 6c, indicating that continuous updates are no longer required during the entire operation. Figure 7 shows the event-triggering rates of the five agents under the event-triggered mechanism. As can be seen, the triggering rate remains low, demonstrating that the proposed event-triggered scheme can effectively save communication resources.

5. Conclusions

This paper explores the practical prescribed-time trajectory tracking consensus problem for nonlinear heterogeneous MASs under a directed graph, accounting for unknown time-varying gains, non-vanishing uncertainties, and external disturbances. First, a time-varying function is employed, and an adaptive backstepping approach is utilized to design a practical prescribed-time trajectory tracking consensus controller. Meanwhile, an event-triggered mechanism is introduced to decrease the frequency of controller updates. Next, we use the Lyapunov method to prove the stability of the system, and a Zeno phenomenon analysis is performed to confirm that the proposed event-triggered control strategy effectively prevents Zeno behavior. Finally, numerical simulation experiments verify the effectiveness of the proposed scheme. Future research will further explore fully distributed event-triggered control strategies with relaxed assumptions on communication topology. Moreover, motivated by [30,31], efforts will be devoted to resilient cooperative control for dynamic stochastic tasks and partial objective failures. By leveraging event-triggered mechanisms to coordinate online task redistribution and prescribed-time tracking control, the goal is to guarantee consensus of heterogeneous systems within stringent time limits.

Author Contributions

Conceptualization, Y.L. and H.C.; methodology, H.C.; validation, H.C. and Y.H.; writing—original draft preparation, H.C.; writing—review and editing, Y.L., D.Y. and Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62473278), the Scientific Research Project of Heilongjiang Provincial Universities, China (Grant No. 145409322).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MASs: Multi-agent systems

References

  1. Bai, X.; Yan, W.; Ge, S.S. Efficient task assignment for multiple vehicles with partially unreachable target locations. IEEE Internet Things J. 2020, 8, 3730–3742. [Google Scholar] [CrossRef]
  2. Bai, X.; Cao, M.; Yan, W.; Ge, S.S.; Zhang, X. Efficient heuristic algorithms for single-vehicle task planning with precedence constraints. IEEE Trans. Cybern. 2020, 51, 6274–6283. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, W.; Wen, C.; Huang, J. Distributed adaptive asymptotically consensus tracking control of nonlinear multi-agent systems with unknown parameters and uncertain disturbances. Automatica 2017, 77, 133–142. [Google Scholar] [CrossRef]
  4. Wang, X.; Li, S.; Yu, X.; Yang, J. Distributed active anti-disturbance consensus for leader-follower higher-order multi-agent systems with mismatched disturbances. IEEE Trans. Autom. Control 2016, 62, 5795–5801. [Google Scholar] [CrossRef]
  5. Cai, Y.; Zhang, H.; Liu, Y.; He, Q. Distributed bipartite finite-time event-triggered output consensus for heterogeneous linear multi-agent systems under directed signed communication topology. Appl. Math. Comput. 2020, 378, 125162. [Google Scholar] [CrossRef]
  6. Du, H.; Wen, G.; Wu, D.; Cheng, Y.; Lü, J. Distributed fixed-time consensus for nonlinear heterogeneous multi-agent systems. Automatica 2020, 113, 108797. [Google Scholar] [CrossRef]
  7. Liu, Y.; Chi, R.; Li, H.; Wang, L.; Lin, N. HiTL-based adaptive fuzzy tracking control of MASs: A distributed fixed-time strategy. Sci. China Technol. Sci. 2023, 66, 2907–2916. [Google Scholar] [CrossRef]
  8. Gao, Y.; Zhou, W.; Niu, B.; Kao, Y.; Wang, H.; Sun, N. Distributed prescribed-time consensus tracking for heterogeneous nonlinear multi-agent systems under deception attacks and actuator faults. IEEE Trans. Autom. Sci. Eng. 2023, 21, 6920–6929. [Google Scholar] [CrossRef]
  9. Ke, J.; Zeng, J.; Duan, Z. Observer-based prescribed-time consensus control for heterogeneous multi-agent systems under directed graphs. Int. J. Robust Nonlinear Control 2023, 33, 872–898. [Google Scholar] [CrossRef]
  10. Ning, B.; Han, Q.-L.; Zuo, Z.; Ding, L.; Lu, Q.; Ge, X. Fixed-time and prescribed-time consensus control of multiagent systems and its applications: A survey of recent trends and methodologies. IEEE Trans. Ind. Inform. 2022, 19, 1121–1135. [Google Scholar] [CrossRef]
  11. Yong, C.; Guangming, X.; Huiyang, L. Reaching consensus at a preset time: Single-integrator dynamics case. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; pp. 6220–6225. [Google Scholar]
  12. Pal, A.K.; Kamal, S.; Yu, X.; Nagar, S.K.; Xiong, X. Free-will arbitrary time consensus for multiagent systems. IEEE Trans. Cybern. 2020, 52, 4636–4646. [Google Scholar] [CrossRef] [PubMed]
  13. Ning, B.; Han, Q.-L.; Zuo, Z. Practical fixed-time consensus for integrator-type multi-agent systems: A time base generator approach. Automatica 2019, 105, 406–414. [Google Scholar] [CrossRef]
  14. Li, Y.; Cai, H.; Zhu, L.; Huang, Y.; Zhang, Z.; Guo, Y. Practical Prescribed-Time Consensus Tracking Control for Nonlinear Heterogeneous MASs with Bounded Time-Varying Gain under Mismatching and Non-Vanishing Uncertainties. IEEE Access 2025, 13, 28557–28573. [Google Scholar] [CrossRef]
  15. Dimarogonas, D.V.; Frazzoli, E.; Johansson, K.H. Distributed event-triggered control for multi-agent systems. IEEE Trans. Autom. Control 2011, 57, 1291–1297. [Google Scholar] [CrossRef]
  16. Ji, L.; Lv, D.; Yang, S.; Guo, X.; Li, H. Finite time consensus control for nonlinear heterogeneous multi-agent systems with disturbances. Nonlinear Dyn. 2022, 108, 2323–2336. [Google Scholar] [CrossRef]
  17. Zhou, H.; Sui, S.; Tong, S. Fuzzy adaptive finite-time consensus control for high-order nonlinear multiagent systems based on event-triggered. IEEE Trans. Fuzzy Syst. 2022, 30, 4891–4904. [Google Scholar] [CrossRef]
  18. Yao, Y.; Luo, Y.; Cao, J. Finite-time guarantee-cost H∞ consensus control of second-order multi-agent systems based on sampled-data event-triggered mechanisms. Neural Netw. 2024, 174, 106261. [Google Scholar] [CrossRef]
  19. Ni, J.; Duan, F.; Shi, P. Fixed-time consensus tracking of multiagent system under DOS attack with event-triggered mechanism. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 5286–5299. [Google Scholar] [CrossRef]
  20. Liu, J.; Zhang, Y.; Sun, C.; Yu, Y. Fixed-time consensus of multi-agent systems with input delay and uncertain disturbances via event-triggered control. Inf. Sci. 2019, 480, 261–272. [Google Scholar] [CrossRef]
  21. Ni, J.; Shi, P.; Zhao, Y.; Pan, Q.; Wang, S. Fixed-time event-triggered output consensus tracking of high-order multiagent systems under directed interaction graphs. IEEE Trans. Cybern. 2020, 52, 6391–6405. [Google Scholar] [CrossRef]
  22. Zheng, X.; Ma, H.; Zhou, Q.; Li, H. Neural-based prescribed-time consensus control for multiagent systems via dynamic memory event-triggered mechanism. Sci. China Technol. Sci. 2025, 68, 1320402. [Google Scholar] [CrossRef]
  23. Wang, S.; Wang, Y. Prescribed-time leader-following control of second-order multi-agent systems under event-triggered mechanism. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; pp. 4205–4210. [Google Scholar]
  24. Li, H.; Jia, X.; Chi, X.; Li, B. Fully Distributed Prescribed-Time Leader-Following Output Consensus of Heterogeneous Multi-Agent Systems With Dynamic Event-Triggered Mechanism. IEEE Trans. Autom. Sci. Eng. 2024, 22, 8341–8350. [Google Scholar] [CrossRef]
  25. Zhang, H.; Lewis, F.L. Adaptive cooperative tracking control of higher-order nonlinear systems with unknown dynamics. Automatica 2012, 48, 1432–1439. [Google Scholar] [CrossRef]
  26. Wang, J.; Chen, W.; Ma, K.; Liu, Z.; Philip Chen, C.L. Adaptive neural event-triggered control for nonlinear uncertain system with input constraint based on auxiliary system. Int. J. Robust Nonlinear Control 2021, 31, 7528–7545. [Google Scholar] [CrossRef]
  27. Luo, D.; Wang, Y.; Song, Y. Practical prescribed time tracking control with bounded time-varying gain under non-vanishing uncertainties. IEEE/CAA J. Autom. Sin. 2024, 11, 219–230. [Google Scholar] [CrossRef]
  28. Dong, G.; Li, H.; Ma, H.; Lu, R. Finite-time consensus tracking neural network FTC of multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 653–662. [Google Scholar] [CrossRef]
  29. Yuan, X.; Chen, B.; Lin, C. Prescribed finite-time adaptive neural tracking control for nonlinear state-constrained systems: Barrier function approach. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 7513–7522. [Google Scholar] [CrossRef]
  30. Bai, X.; Cao, M.; Yan, W. Event-and time-triggered dynamic task assignments for multiple vehicles. Auton. Robot. 2020, 44, 877–888. [Google Scholar] [CrossRef]
  31. Naeem, H.M.Y.; Bhatti, A.I.; Butt, Y.A.; Ahmed, Q.; Bai, X. Energy efficient solution for connected electric vehicle and battery health management using eco-driving under uncertain environmental conditions. IEEE Trans. Intell. Veh. 2024, 9, 4621–4631. [Google Scholar] [CrossRef]
Figure 1. The signed network topology.
Figure 2. The trajectory tracking consensus of heterogeneous MASs: (a) Prescribed time T = 2 s. (b) Prescribed time T = 1 s. (c) Compared with [14], prescribed time T = 1 s.
Figure 3. Tracking error e i ( t ) : (a) Prescribed time T = 2 s. (b) Prescribed time T = 1 s. (c) Compared with [14], prescribed time T = 1 s.
Figure 4. The adaptive law ð ^ i , 1 : (a) Prescribed time T = 2 s. (b) Prescribed time T = 1 s. (c) Compared with [14], prescribed time T = 1 s.
Figure 5. The adaptive law ð ^ i , 2 : (a) Prescribed time T = 2 s. (b) Prescribed time T = 1 s. (c) Compared with [14], prescribed time T = 1 s.
Figure 6. The control input u i : (a) Prescribed time T = 2 s. (b) Prescribed time T = 1 s. (c) Compared with [14], prescribed time T = 1 s.
Figure 7. The triggering rate: (a) Prescribed time T = 2 s. (b) Prescribed time T = 1 s.
Table 1. Parameter values.
x 1 ( 0 ) = [ 0.1 , 0.2 , 0 , 0.1 , 0.2 ] T k ¯ ð ^ i , 1 = 0.5 ζ i , 1 = 0.5 r i , 2 = 0.001
x 2 ( 0 ) = [ 0.5 , 0 , 0.5 , 0.2 , 0.5 ] T c i , 1 = 1 ζ i , 2 = 0.5 h = 0.5
ð ^ i , 1 ( 0 ) = [ 0.1 , 0.2 , 0.3 , 0.4 , 0.5 ] T c i , 2 = 0.5 r i , 1 = 0.0001 h 1 = 1
ð ^ i , 2 ( 0 ) = [ 0.1 , 0.1 , 0.1 , 0.1 , 0.1 ] T k i , 1 = 3 r i , 2 = 0.001 τ = 0.9
k ¯ ð ^ i , 1 = 0.5 k i , 2 = 1 r j , 1 = 0.0001 ν = 0.8
l = 3
