Article

Event-Triggered Adaptive Control for Multi-Agent Systems Utilizing Historical Information

1 School of Electrical and Electronic Information, Xihua University, Chengdu 610039, China
2 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(2), 261; https://doi.org/10.3390/math14020261
Submission received: 21 November 2025 / Revised: 5 January 2026 / Accepted: 6 January 2026 / Published: 9 January 2026
(This article belongs to the Special Issue Analysis and Applications of Control Systems Theory)

Abstract

In this study, an adaptive event-driven coordination paradigm is proposed for achieving consensus in nonlinear multi-agent systems (MASs) over directed networks. First, a novel dynamic event-triggered mechanism based on single-point historical information is introduced to reduce unnecessary network communication, and a more general event-triggered mechanism based on moving-window historical information is designed to save further network resources. Since relying on historical information over a long period may introduce deviations, an event-triggered mechanism that adjusts the maximum memory length is also proposed. Secondly, the unknown nonlinearities in the MAS model are handled using the universal approximation capability of neural networks. Then, a distributed adaptive control methodology under the event-triggered mechanisms is developed by leveraging memory-based command-filtered backstepping, which resolves the complexity explosion problem. Finally, a case study is conducted to validate the feasibility of the proposed method.

1. Introduction

Distributed consensus control has been a central theme in MAS research over the past decade [1,2,3]. A pivotal area within this field is consensus tracking, often realized through a leader–follower framework. The importance of this problem is driven by its critical applications across various advanced domains, including cooperative UAVs, spacecraft formation, distributed sensor networks, and robotic swarms [4,5,6,7,8,9]. In comparison with single-system trajectory tracking [10], the primary challenge of consensus tracking control lies in designing a distributed controller that uses only local information to achieve consensus tracking of the desired trajectory (i.e., the leader). Furthermore, a notable feature of the works discussed above is their consideration of relatively simple system models, often confined to linear systems. However, numerous control systems in practical engineering involve unknown nonlinear dynamics, which presents even greater challenges for controller design. To handle fully unknown nonlinear dynamics within these systems, neural networks (NNs) have been introduced and successfully applied [11,12,13,14].
Importantly, the control strategies established in prior work are updated periodically at fixed time intervals, resulting in excessive update frequencies and substantial communication costs. In many MAS applications, each agent has limited energy reserves and communication capacity, so control mechanisms that economize network resources and improve control efficiency are imperative. Recently, event-triggered control (ETC) has emerged as an effective means of tackling this issue: control tasks are executed only when a predefined triggering criterion is satisfied [15]. Recent research has increasingly focused on leveraging advanced event-triggered sampling for consensus in MASs [16,17,18,19,20]. For instance, utilizing a state-shifting transformation to convert constrained states into unconstrained variables within a predefined time, ref. [21] introduced an adaptive event-triggered consensus protocol featuring a relative threshold for nonlinear pure-feedback MASs subject to delayed full-state constraints. Moreover, existing ETC approaches have not fully utilized historical information (memory) to enhance resource efficiency. In fact, long-term memory in the human brain guides future behavior. Inspired by this idea, a memory-based dynamic triggering mechanism was incorporated into the ETC design for a class of nonlinear systems in [22], yet the potential of leveraging long-term memory to guide the triggering mechanism remains largely unexplored. Therefore, designing a novel memory-based ETC scheme for MASs without increasing system complexity remains a challenging problem that warrants further investigation.
Building on the above analysis, an event-triggered control scheme is presented, utilizing a memory-based command filter approach and a historical information-based dynamic triggering mechanism for enhanced resource conservation. The key novelties of our method are outlined below.
  • For multi-agent systems, this paper proposes a novel distributed adaptive control scheme integrating command filters and historical information. This scheme not only avoids the explosion of complexity but also ensures the boundedness of closed-loop signals. Compared with existing studies, the proposed scheme indirectly incorporates historical information into the controller design process, providing a new approach for memory to guide the behavior of followers.
  • Distinct from the ETC schemes presented in [22,23], this paper draws inspiration from the human brain’s ability to make decisions by leveraging memory and proposes two triggering mechanisms: a single-point historical triggering mechanism, which capitalizes on memory at a specific moment, and a moving window-based historical triggering mechanism, which incorporates memory over a time interval. In the latter, the maximum memory length is constrained by a dynamic parameter to mitigate potential deviations induced by long-term memory. The proposed framework provides an alternative approach to adaptively designing ETC through historical information.

2. Problem Formulation

This work investigates the coordinated tracking problem for nonlinear MASs consisting of a leader with trajectory $y_r$ and $N$ followers (indexed by $i = 1, \dots, N$), where the $i$th follower's motion is governed by

$$\dot{x}_{i,j} = x_{i,j+1} + \phi_{i,j}(\bar{x}_{i,j}), \quad j = 1, \dots, n-1 \tag{1a}$$
$$\dot{x}_{i,n} = u_i + \phi_{i,n}(\bar{x}_{i,n}) \tag{1b}$$
$$y_i = x_{i,1} \tag{1c}$$
with the state vector denoted as $\bar{x}_{i,n} = [x_{i,1}, \dots, x_{i,n}]^T$ and the system input and output denoted as $u_i \in \mathbb{R}$ and $y_i$, respectively. Each function $\phi_{i,j}(\bar{x}_{i,j})$ is continuous and uncertain. The communication among followers is formulated as a weighted directed graph $\mathcal{G} = (\mathcal{F}, \mathcal{E})$ with node set $\mathcal{F} = \{1, 2, \dots, N\}$ and edge set $\mathcal{E} \subseteq \mathcal{F} \times \mathcal{F}$. In this graph, an edge $(j, i) \in \mathcal{E}$ signifies a directed information flow from agent $j$ to agent $i$. Accordingly, the neighbor set of agent $i$ is given by $\mathcal{N}_i = \{j \mid (j, i) \in \mathcal{E}\}$. The weighted adjacency matrix $B = [c_{ij}] \in \mathbb{R}^{N \times N}$ is defined by the following rule: $c_{ij} > 0$ if $(j, i) \in \mathcal{E}$ and $c_{ij} = 0$ otherwise. From $B$, the in-degree matrix $R = \operatorname{diag}\{r_1, \dots, r_N\}$ is computed as $r_i = \sum_{j=1}^{N} c_{ij}$, which ultimately gives the Laplacian matrix $L = R - B$.
The leader's trajectory $y_r$ satisfies Assumption 1. Additionally, the communication from the leader to the followers is encoded by the diagonal pinning matrix $W = \operatorname{diag}\{w_1, \dots, w_N\}$, where $w_i > 0$ if and only if agent $i$ has direct access to the leader's information.
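To make these graph quantities concrete, the following minimal sketch (in Python with NumPy; the function name `graph_matrices` and the interface are our own, not from the paper) assembles $R$, $L$, and $W$ from a given weighted adjacency matrix:

```python
import numpy as np

def graph_matrices(B, leader_access):
    """Build the in-degree matrix R, Laplacian L = R - B, and pinning
    matrix W from a weighted adjacency matrix B = [c_ij] (row i collects
    the weights of agent i's in-neighbors) and a 0/1 leader-access vector."""
    r = B.sum(axis=1)              # r_i = sum_j c_ij (in-degrees)
    R = np.diag(r)
    L = R - B                      # graph Laplacian
    W = np.diag(leader_access)     # pinning gains w_i
    return L, W
```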
Assumption 1
([24]). The desired trajectory $y_r$ and its derivative $\dot{y}_r$ are continuous, bounded, and available.
Assumption 2
([25]). The directed communication graph $\mathcal{G}$ contains a directed spanning tree with the leader as the root; i.e., every follower node is reachable from the leader.
The primary objective of this work is to devise an event-triggered distributed controller that drives the follower outputs $y_i$ to track the leader trajectory $y_r$, while guaranteeing that all closed-loop signals are bounded, that Zeno behavior is excluded, and that the control update frequency is reduced to conserve resources. For rigor, some necessary lemmas are given below.
Lemma 1
([26]). For any $\epsilon \in \mathbb{R}^{+}$ and $\Xi \in \mathbb{R}$, it holds that
$$|\Xi| \le \frac{\Xi^2}{\sqrt{\Xi^2 + \epsilon^2}} + \epsilon$$
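Lemma 1 is easy to check numerically. The snippet below (our own illustration, not from the paper) verifies that the gap $|\Xi| - \Xi^2/\sqrt{\Xi^2 + \epsilon^2}$ always lies in $[0, \epsilon]$:

```python
import numpy as np

# Sanity check of Lemma 1: the gap |X| - X^2/sqrt(X^2 + eps^2) is in [0, eps].
rng = np.random.default_rng(0)
for eps in (0.01, 0.5, 2.0):
    X = rng.uniform(-10, 10, size=100_000)
    gap = np.abs(X) - X**2 / np.sqrt(X**2 + eps**2)
    assert gap.min() >= -1e-12 and gap.max() <= eps + 1e-12
```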
Lemma 2
([27]). Assume that $\bar{V}(t)$ is absolutely continuous and $\bar{V}(t) \ge 0$ for $t \ge 0$, and that
$$\frac{d\bar{V}(t)}{dt} \le -a(t)\bar{V}(t) + \gamma(t) + b(t)\sup_{t-h \le \tau \le t} \bar{V}(\tau)$$
with continuous bounded functions $\gamma(t) \ge 0$, $a(t) \ge 0$, and $b(t) \ge 0$. Then, for every $t \ge t_0$,
$$\bar{V}(t) \le \gamma^{*} + \Pi e^{-\bar{\kappa}(t - t_0)}$$
where $\gamma^{*} = \sup_{t \ge -h} \gamma(t)$, $\Pi = \sup_{\tau \in [-h, 0]} |\psi(\tau)|^{2}$ with initial function $\psi$, and $\bar{\kappa} = \inf_{t \ge h}\{\kappa : \kappa - a + b e^{\kappa h} = 0\}$.
Lemma 3
([28]). A continuous function $\phi_i$ defined on a compact set can be reconstructed as
$$\phi_i(\cdot) = \theta_i^{T} \varphi_i(\cdot) + \varepsilon_i(\cdot)$$
where $\theta_i$ is the optimal weight vector, the reconstruction error satisfies $|\varepsilon_i| \le \varepsilon_m$, and $\varphi_i$ denotes the basis function vector.
Lemma 4
([25]). If the communication graph contains a directed spanning tree, the matrix $L + W$ is nonsingular.

3. Controller Design and Theoretical Analysis

3.1. Event-Triggered Mechanism with Memory

Firstly, note that the control law is implemented on a digital computational platform. We formulate the ETC protocol as
$$\begin{aligned} u_i^d(t) &= u_i(t_{i,k}), \quad t \in [t_{i,k}, t_{i,k+1}) \\ t_{i,k+1} &= \inf\big\{t > t_{i,k} \,\big|\, |e_i^u| \ge \Psi_{i,1}|u_i(t)| + \Psi_{i,2}\,\xi_i(t)\big\} \end{aligned} \tag{4}$$
where $u_i^d$ is the control input held between the triggering instants $t_{i,k}$, and the triggering error is defined as $e_i^u = u_i^d - u_i(t)$. $\Psi_{i,1}$ and $\Psi_{i,2}$ are positive constants. The dynamic parameter $\xi_i(t)$ is designed according to the following two cases.
Case 1: In this case, the dynamic parameter provides a dynamic minimum inter-event time through its boundedness and the property $\xi_i(t) > 0$. The dynamic equation of $\xi_i(t)$ is
$$\dot{\xi}_i(t) = -Q_i\,\xi_i(t) + \Pi_i\, e^{-\xi_i(t-h)}\,\xi_i(t-h) + M_i$$
which utilizes single-point historical information, with positive constants $Q_i$, $M_i$, $\Pi_i$ and positive initial condition $\xi_i(\tau) = \wp_i(\tau) = |\xi_0|$ for $\tau \in [-h, 0]$. Note that if we choose $V_{\xi_i} = \frac{1}{2}\xi_i^2$ and take its derivative, then
$$\dot{V}_{\xi_i} \le -\Big(Q_i - \frac{\Pi_i}{2} - \frac{1}{2}\Big)\xi_i^2 + \frac{\Pi_i}{2}\,\xi_i^2(t-h) + \frac{1}{2}M_i^2 \le -\tilde{Q}_i V_{\xi_i} + \Pi_i \sup_{t-h \le \tau \le t} V_{\xi_i}(\tau) + \frac{1}{2}M_i^2$$
where $\tilde{Q}_i = 2\big(Q_i - \frac{\Pi_i}{2} - \frac{1}{2}\big) > 0$ provided that $Q_i > \frac{\Pi_i}{2} + \frac{1}{2} + \epsilon_i$ for some $\epsilon_i > 0$. By the generalized Halanay inequality [27], the boundedness of $\xi_i$ is obtained. To prove $\xi_i(t) > 0$, first suppose that there exists a first instant $t_{\xi_i} \in [0, h]$ such that $\xi_i(t_{\xi_i}) = 0$. Since $\dot{\xi}_i(t) \ge -Q_i \xi_i(t)$ on $[0, h]$, the comparison principle gives $\xi_i(t_{\xi_i}) \ge \xi_i(0)\,e^{-Q_i t_{\xi_i}} > 0$, which is a contradiction. Then $\xi_i(t) > 0$ over $[-h, \infty)$ follows by repeating the analysis on each interval $[jh, (j+1)h]$.
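A minimal discrete-time sketch of the triggering rule (4) combined with the Case-1 dynamic parameter is given below (a Python illustration under our own assumptions: the Euler step `dt`, the delay-buffer handling, and the parameter values are ours, and the exogenous signal `u_of_t` stands in for the continuously computed control input):

```python
import numpy as np

def simulate_etm_case1(u_of_t, T=20.0, dt=1e-3, h=0.1,
                       Q=2.4, Pi=1.8, M=0.01, Psi1=0.15, Psi2=0.35, xi0=0.5):
    """Held-input event-triggered mechanism (4) with the Case-1 delayed
    dynamic parameter; u_of_t(t) plays the role of u_i(t)."""
    n, d = int(T / dt), int(h / dt)
    xi = np.full(n + 1, xi0)          # xi(tau) = |xi_0| on [-h, 0]
    u_held, events = u_of_t(0.0), [0.0]
    for k in range(n):
        t = k * dt
        u = u_of_t(t)
        if abs(u_held - u) >= Psi1 * abs(u) + Psi2 * xi[k]:
            u_held = u                # event: sample and hold the input
            events.append(t)
        xi_h = xi[k - d] if k >= d else xi0   # single-point memory xi(t-h)
        xi[k + 1] = xi[k] + dt * (-Q * xi[k] + Pi * np.exp(-xi_h) * xi_h + M)
    return events, xi

events, xi = simulate_etm_case1(lambda t: np.sin(t))
print(len(events), xi.min() > 0)      # event count; xi stays positive
```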
Case 2: It is noted that memory or historical experience can serve as a basis for learning when the human brain lacks prior knowledge of a situation. Case 1 builds a dynamic event condition on single-point historical information, but experience often accumulates over a period of time. Therefore, the following dynamic mechanism based on the historical information of a moving window is introduced:
$$\dot{\xi}(t) = -\vartheta_1(\xi(t)) + \vartheta_2\Big(\sup_{s \in [t-h, t]} |\xi(s)|\Big) + c_m |e_u| \tag{5}$$
where the Lipschitz continuous function $\vartheta_1$ belongs to class $\mathcal{K}_\infty$, $\vartheta_2$ is a positive function, $c_m$ is a positive constant, and $h$ is the memory span. As in the analysis of Case 1, since the last two terms of (5) are nonnegative, $\xi > 0$ is obtained from a positive initial value. To prove boundedness, first define $\bar{\nu}(s) = \epsilon \nu(s)$, where $\nu(s) = \vartheta_1(s) - \vartheta_2(\rho(s))$ is of class $\mathcal{K}_\infty$, $\epsilon \in (0, 1)$, and $\rho(s) > s$ is a continuous nondecreasing function. If $\xi(t) > \bar{\nu}^{-1}(c_m c_{e_u})$ for all positive times, with $c_{e_u} = \sup_{t \ge 0}|e_u(t)|$, then $\dot{\xi}(t) \le -\vartheta_1(\xi(t)) + \vartheta_2(\sup_{s \in [t-h, t]} |\xi(s)|) + \bar{\nu}(\xi(t))$. Since $\vartheta_1(s) - \vartheta_2(\rho(s)) - \bar{\nu}(s) = (1 - \epsilon)\nu(s) \in \mathcal{K}_\infty$, Proposition 3 in [29] yields $\xi(t) \le \max\{\xi(0), \rho^{-1}(|\xi_0|), \bar{\nu}^{-1}(c_m c_{e_u})\}$, which guarantees the boundedness of $\xi(t)$.
However, the long-term memory of the human brain is not only limited in capacity but also exhibits inherent biases in its representations. Based on this consideration, the following dynamic mechanism limits the memory term for a multi-agent system:
$$\begin{aligned} \dot{\xi}_i(t) &= -Q_i \xi_i(t) + \Pi_i \max\Big\{0,\ \varrho_i(t) \max_{s \in [t-h, t]} |\xi_i(s)|\Big\} + c_i^m |e_i^u| \\ \dot{\varrho}_i(t) &= -2 L_{i,1} \varrho_i - (1 + c_{\varrho_i})\big(L_{i,2}^2 \varrho_i^2 + 1\big) \end{aligned} \tag{6}$$
where the initial value is $\varrho_i(0) = \gamma_{\varrho_i}$ and $L_{i,1}$, $c_{\varrho_i}$, $L_{i,2}$ are positive constants; the other parameters have the same meaning as in Case 1. For (6), if $\varrho_i(t) < 0$, then $\dot{\xi}_i(t) = -Q_i \xi_i(t) + c_i^m |e_i^u|$, which implies that $\xi_i(t)$ is positive and bounded by selecting $V_{\xi_i}$ as in Case 1. If $\varrho_i(t) \ge 0$, (6) becomes $\dot{\xi}_i(t) = -Q_i \xi_i(t) + \Pi_i \varrho_i(t) \max_{s \in [t-h, t]} |\xi_i(s)| + c_i^m |e_i^u|$. Since $\Pi_i \varrho_i(t) \max_{s \in [t-h, t]} |\xi_i(s)| \ge 0$ and $c_i^m |e_i^u| \ge 0$, we derive $\xi_i(t) > 0$ from $\dot{\xi}_i(t) \ge -Q_i \xi_i(t)$ and the argument above. From (6), $\varrho_i(t) \le e^{-2 L_{i,1} t} \varrho_i(0)$. Then, according to [27,29], the boundedness of $\xi_i(t)$ is obtained from $\xi_i(t) \le \frac{c_{e_i^u}}{\sigma_{\xi_i}} + \bar{\wp}_i e^{-\mu_{\xi_i} t}$, where $c_{e_i^u} = \sup_{t \in [0, \infty)} |e_i^u|$, $\sigma_{\xi_i}$ satisfies $-Q_i + \Pi_i e^{-2 L_{i,1} t} \varrho_i(0) \le -\sigma_{\xi_i} < 0$, $\bar{\wp}_i = \sup_{\tau \in [-h, 0]} \wp_i(\tau)$, and $\mu_{\xi_i} = \inf_{t \ge 0}\{\mu_{\xi_i} : \mu_{\xi_i} - Q_i + \Pi_i e^{-2 L_{i,1} t} \varrho_i(0) e^{\mu_{\xi_i} h} = 0\}$.
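The following sketch (again our own Python illustration, with hypothetical parameter values) integrates the limited-memory mechanism (6); the `max(0.0, ...)` factor shows how the limiter $\varrho_i$ switches the moving-window memory off once it turns negative:

```python
import numpy as np

def xi_case2_step(xi_buf, rho, e_u, dt=1e-3,
                  Q=0.18, Pi=1.8, cm=0.35, L1=0.06, L2=0.06, c_rho=0.25):
    """One Euler step of mechanism (6).  xi_buf holds the last h/dt
    samples of xi (moving-window memory); e_u is the current trigger error."""
    xi = xi_buf[-1]
    mem = max(0.0, rho * np.max(np.abs(xi_buf)))   # limited window memory
    xi_next = xi + dt * (-Q * xi + Pi * mem + cm * abs(e_u))
    rho_next = rho + dt * (-2 * L1 * rho - (1 + c_rho) * (L2**2 * rho**2 + 1))
    return xi_next, rho_next

# usage: a rolling buffer of length h/dt implements the sup over [t-h, t]
buf, rho = [0.5] * 100, 6.0
for _ in range(1000):
    xi_next, rho = xi_case2_step(np.array(buf), rho, e_u=0.01)
    buf = buf[1:] + [xi_next]
print(buf[-1] > 0, rho)
```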
Remark 1. 
Note that (6) is a specific form of the triggering mechanism (5). Moreover, by Proposition 2.3 in [30], the number of triggering events under the two cases is smaller than under the dynamic mechanism proposed in [30].
Remark 2. 
The two memory-based event-triggered mechanisms proposed in this section use the historical information of the triggering error only indirectly. A historical-information-based triggering mechanism for evaluating system performance and resource savings will be considered in future work.

3.2. Distributed Control Protocol

This section details the design of the controller. We first construct the memory-based auxiliary signal $\varsigma_{i,j}$, defined by the dynamic equations
$$\dot{\varsigma}_{i,1}(t) = -k_{i,1}\varsigma_{i,1}(t) + (w_i + r_i)\varsigma_{i,2}(t) + \beta_{i,1}e^{-\mu_{i,1}t}\varsigma_{i,1}(t-h) + (w_i + r_i)\big(\alpha_{i,2}^d - \alpha_{i,1}\big) \tag{7a}$$
$$\dot{\varsigma}_{i,j}(t) = -k_{i,j}\varsigma_{i,j} - \eta_{i,j}\varsigma_{i,j-1} + \varsigma_{i,j+1} + \beta_{i,j}e^{-\mu_{i,j}t}\varsigma_{i,j}(t-h) + \big(\alpha_{i,j+1}^d - \alpha_{i,j}\big) \tag{7b}$$
$$\dot{\varsigma}_{i,n}(t) = -k_{i,n}\varsigma_{i,n}(t) + \beta_{i,n}e^{-\mu_{i,n}t}\varsigma_{i,n}(t-h) \tag{7c}$$
where $k_{i,j}$, $\beta_{i,j}$, $\mu_{i,j}$ are known positive constants, the positive constant $h$ is the length of preserved historical information, and $\psi_i$ is the initial function, i.e., $\varsigma_i(\tau) = \psi_i(\tau)$, $\tau \in [-h, 0]$, $\psi_i : [-h, 0] \to \mathbb{R}^{+}$. Moreover, $\eta_{i,j} = w_i + r_i$ for $j = 2$ and $\eta_{i,j} = 1$ for $3 \le j \le n$.
This mechanism serves to mitigate the inaccuracy arising from the first-order command filter governed by
$$a_{i,j}\,\dot{\alpha}_{i,j}^d(t) + \alpha_{i,j}^d(t) = \alpha_{i,j-1}(t), \quad \alpha_{i,j}^d(0) = \alpha_{i,j}(0) \tag{8}$$
where $a_{i,j}$ is a positive constant defined later and $\alpha_{i,j-1}(t)$ is the virtual controller derived through the backstepping procedure, whereby the consensus tracking error of the $i$th follower is given by
$$e_{i,1} = \sum_{j=1}^{N} c_{ij}(y_i - y_j) + w_i(y_i - y_r) \tag{9a}$$
$$e_{i,j}(t) = x_{i,j}(t) - \alpha_{i,j}^d(t), \quad j = 2, \dots, n \tag{9b}$$
with $\alpha_{i,1}^d = y_r$.
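As a concrete illustration of these two building blocks, the sketch below discretizes the first-order command filter (8) and the first-layer compensation signal (7a) by forward Euler (our Python sketch; the function names, the delay handling, and the default parameter values are assumptions, not the authors' code):

```python
import numpy as np

def command_filter_step(alpha_d, alpha_prev, a=0.06, dt=1e-3):
    """One Euler step of a*d(alpha_d)/dt + alpha_d = alpha_prev, Eq. (8)."""
    return alpha_d + dt * (alpha_prev - alpha_d) / a

def compensator_step(zeta1, zeta2, zeta1_delayed, alpha2_d, alpha1,
                     w_r, t, k1=8.0, beta1=0.35, mu1=0.12, dt=1e-3):
    """One Euler step of the memory-based signal (7a); zeta1_delayed
    is the stored value zeta1(t - h)."""
    dzeta1 = (-k1 * zeta1 + w_r * zeta2
              + beta1 * np.exp(-mu1 * t) * zeta1_delayed
              + w_r * (alpha2_d - alpha1))
    return zeta1 + dt * dzeta1
```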
Remark 3. 
The memory span h in the triggering condition and in the auxiliary signal $\varsigma_{i,q}$ is a design constant that can be set per system requirements. The relationship between h and the system state will be further discussed in future work.
To this end, a recursive control design is carried out as follows to obtain the explicit form of the controller.
Step 1: The following function is employed as a Lyapunov candidate:
$$V_{i,1} = \frac{1}{2}s_{i,1}^2 + \frac{1}{2\gamma_{i,1}}\tilde{\nu}_{i,1}^2$$
with $s_{i,1} = e_{i,1} - \varsigma_{i,1}$ denoting the compensated tracking error and $\tilde{\nu}_{i,1} = \nu_{i,1} - \hat{\nu}_{i,1}$ denoting the estimation error of $\nu_{i,1}$, where $\nu_{i,1} = \sup\|\theta_{i,1}\|$.
From (1), (7a), and (9a), $\dot{V}_{i,1}$ is given by
$$\dot{V}_{i,1} = s_{i,1}\Big[(w_i + r_i)\big(e_{i,2} + \alpha_{i,2}^d\big) + \Phi_{i,1}(\bar{X}_{i,1}) - w_i\dot{y}_r - \dot{\varsigma}_{i,1}\Big] + \frac{1}{\gamma_{i,1}}\tilde{\nu}_{i,1}\dot{\tilde{\nu}}_{i,1}$$
where $\Phi_{i,1}(\bar{X}_{i,1}) = (w_i + r_i)\phi_{i,1}(\bar{x}_{i,1}) - \sum_{j=1}^{N}c_{ij}\big(x_{j,2} + \phi_{j,1}(\bar{x}_{j,1})\big)$ is an uncertain function with $\bar{X}_{i,1} = (x_{i,1}, x_{j,2})$. From Lemma 3, there exists an NN such that $\Phi_{i,1}(\bar{X}_{i,1}) = \theta_{i,1}^T\varphi_{i,1}(\bar{X}_{i,1}) + \varepsilon_{i,1}$.
Then, based on the following inequality derived from Lemma 1,
$$s_{i,1}\big(\theta_{i,1}^T\varphi_{i,1}(\bar{X}_{i,1}) + \varepsilon_{i,1}\big) \le \nu_{i,1}\,\frac{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1}}{\sqrt{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1} + \sigma_{i,1}^2}} + \sigma_{i,1}\nu_{i,1} + |s_{i,1}||\varepsilon_{i,1}|$$
it holds that
$$\begin{aligned} \dot{V}_{i,1} \le{}& s_{i,1}\Big[(w_i + r_i)s_{i,2} + (w_i + r_i)\alpha_{i,1} + k_{i,1}\varsigma_{i,1} - \beta_{i,1}e^{-\mu_{i,1}t}\varsigma_{i,1}(t-h) - w_i\dot{y}_r + \hat{\nu}_{i,1}\frac{s_{i,1}\varphi_{i,1}^T\varphi_{i,1}}{\sqrt{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1} + \sigma_{i,1}^2}}\Big] \\ &+ \sigma_{i,1}\nu_{i,1} + |s_{i,1}||\varepsilon_{i,1}| + \tilde{\nu}_{i,1}\Big(\frac{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1}}{\sqrt{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1} + \sigma_{i,1}^2}} - \frac{1}{\gamma_{i,1}}\dot{\hat{\nu}}_{i,1}\Big) \end{aligned}$$
where $\sigma_{i,1}$ is positive and integrable for all time and $\gamma_{i,1}$ is a positive constant. Choose the virtual controller $\alpha_{i,1}$ and the NN learning law for $\hat{\nu}_{i,1}$ as
$$\alpha_{i,1} = \frac{1}{w_i + r_i}\Big(-k_{i,1}e_{i,1} + \beta_{i,1}e^{-\mu_{i,1}t}\varsigma_{i,1}(t-h) - \hat{\nu}_{i,1}\frac{s_{i,1}\varphi_{i,1}^T\varphi_{i,1}}{\sqrt{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1} + \sigma_{i,1}^2}} - L_{i,1}s_{i,1} + w_i\dot{y}_r - c_{i,1}^{\varepsilon}\operatorname{sign}(s_{i,1})\Big) \tag{12}$$
$$\dot{\hat{\nu}}_{i,1} = \gamma_{i,1}\Big(\frac{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1}}{\sqrt{s_{i,1}^2\varphi_{i,1}^T\varphi_{i,1} + \sigma_{i,1}^2}} - \iota_{i,1}\hat{\nu}_{i,1}\Big) \tag{13}$$
where $k_{i,1}$, $L_{i,1}$, and $c_{i,1}^{\varepsilon}$ are positive constants. According to Lemma 3, if $c_{i,1}^{\varepsilon} > \varepsilon_{i,1}^{m}$, it follows that
$$\dot{V}_{i,1} \le (w_i + r_i)s_{i,1}s_{i,2} - k_{i,1}s_{i,1}^2 - L_{i,1}s_{i,1}^2 - \frac{\iota_{i,1}}{2}\tilde{\nu}_{i,1}^2 + \frac{\iota_{i,1}}{2}\nu_{i,1}^2 + \sigma_{i,1}\nu_{i,1} \tag{14}$$
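To make the step-1 computations concrete, the sketch below evaluates the virtual controller (12) and the adaptation law (13) for one agent (our Python illustration; `phi` is the RBF basis vector of Lemma 3, and all names and default values are our assumptions):

```python
import numpy as np

def step1_control(e1, zeta1, zeta1_delayed, nu_hat, phi, t, ydot_r,
                  w_plus_r, w, k1=8.0, L1=2.5, beta1=0.35, mu1=0.12,
                  sigma1=0.08, c_eps=0.06, gamma1=6.0, iota1=0.35):
    """Virtual control alpha_{i,1} of Eq. (12) and the adaptation rate
    of Eq. (13) for one agent."""
    s1 = e1 - zeta1                                    # compensated error
    pTp = float(phi @ phi)
    rho = s1 * pTp / np.sqrt(s1**2 * pTp + sigma1**2)  # Lemma-1 regressor
    alpha1 = (-k1 * e1 + beta1 * np.exp(-mu1 * t) * zeta1_delayed
              - nu_hat * rho - L1 * s1 + w * ydot_r
              - c_eps * np.sign(s1)) / w_plus_r
    nu_hat_dot = gamma1 * (s1 * rho - iota1 * nu_hat)  # Eq. (13)
    return alpha1, nu_hat_dot
```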
Step j ($2 \le j \le n-1$): Take the Lyapunov candidate function as
$$V_{i,j} = V_{i,j-1} + \frac{1}{2}s_{i,j}^2 + \frac{1}{2\gamma_{i,j}}\tilde{\nu}_{i,j}^2 \tag{15}$$
where $s_{i,j} = e_{i,j} - \varsigma_{i,j}$ is the compensated error signal, $\tilde{\nu}_{i,j} = \nu_{i,j} - \hat{\nu}_{i,j}$ is the parameter estimation error, and $\gamma_{i,j}$ is a positive design parameter.
Use the NN to estimate $\phi_{i,j}(\bar{x}_{i,j})$ and devise the virtual controller $\alpha_{i,j}$ and the $\hat{\nu}_{i,j}$ updating law as
$$\alpha_{i,j} = -k_{i,j}e_{i,j} - \eta_{i,j}e_{i,j-1} + \beta_{i,j}e^{-\mu_{i,j}t}\varsigma_{i,j}(t-h) - \hat{\nu}_{i,j}\frac{s_{i,j}\varphi_{i,j}^T\varphi_{i,j}}{\sqrt{s_{i,j}^2\varphi_{i,j}^T\varphi_{i,j} + \sigma_{i,j}^2}} - L_{i,j}s_{i,j} + \dot{\alpha}_{i,j}^d - c_{i,j}^{\varepsilon}\operatorname{sign}(s_{i,j}) \tag{16}$$
$$\dot{\hat{\nu}}_{i,j} = \gamma_{i,j}\Big(\frac{s_{i,j}^2\varphi_{i,j}^T\varphi_{i,j}}{\sqrt{s_{i,j}^2\varphi_{i,j}^T\varphi_{i,j} + \sigma_{i,j}^2}} - \iota_{i,j}\hat{\nu}_{i,j}\Big) \tag{17}$$
Based on (7b), (15), (16), (17), and the following inequality derived from Lemma 1,
$$s_{i,j}\big(\theta_{i,j}^T\varphi_{i,j}(\bar{x}_{i,j}) + \varepsilon_{i,j}\big) \le \nu_{i,j}\frac{s_{i,j}^2\varphi_{i,j}^T\varphi_{i,j}}{\sqrt{s_{i,j}^2\varphi_{i,j}^T\varphi_{i,j} + \sigma_{i,j}^2}} + \sigma_{i,j}\nu_{i,j} + |s_{i,j}||\varepsilon_{i,j}|$$
if $c_{i,j}^{\varepsilon} > \varepsilon_{i,j}^{m}$, we further obtain
$$\dot{V}_{i,j} \le \sum_{p=1}^{j}\Big(-k_{i,p}s_{i,p}^2 - L_{i,p}s_{i,p}^2 - \frac{\iota_{i,p}}{2}\tilde{\nu}_{i,p}^2\Big) + s_{i,j}s_{i,j+1} + \sum_{p=1}^{j}\Big(\sigma_{i,p}\nu_{i,p} + \frac{\iota_{i,p}}{2}\nu_{i,p}^2\Big)$$
where $\sigma_{i,j}$ denotes a positive, integrable, time-varying function and $\varepsilon_{i,j}^{m}$ denotes the upper bound of $|\varepsilon_{i,j}|$.
Step n: In this final step, from (1) and (9), the dynamics of $e_{i,n}$ are obtained as
$$\dot{e}_{i,n} = u_i + \theta_{i,n}^T\varphi_{i,n}(\bar{x}_{i,n}) + \varepsilon_{i,n} - \dot{\alpha}_{i,n}^d \tag{19}$$
Select the Lyapunov function candidate $\bar{V} = V_{i,n}$ as
$$\bar{V} = V_{i,n-1} + \frac{1}{2}s_{i,n}^2 + \frac{1}{2\gamma_{i,n}}\tilde{\nu}_{i,n}^2 \tag{20}$$
where $\gamma_{i,n} > 0$ is a design constant. Taking the derivative of (20) yields
$$\dot{V}_{i,n} = \dot{V}_{i,n-1} + s_{i,n}\Big(u_i^d + \theta_{i,n}^T\varphi_{i,n}(\bar{x}_{i,n}) + \varepsilon_{i,n} - \dot{\alpha}_{i,n}^d + k_{i,n}\varsigma_{i,n} - \beta_{i,n}e^{-\mu_{i,n}t}\varsigma_{i,n}(t-h)\Big) + \frac{1}{\gamma_{i,n}}\tilde{\nu}_{i,n}\dot{\tilde{\nu}}_{i,n}$$
Using (7c), (19), Lemma 3, and an argument similar to that of step j, design
$$u_i = -\frac{1}{1 - \Psi_{i,1}}\Big(\beta_{i,n}\frac{s_{i,n}\varsigma_{i,n}^2(t-h)}{\sqrt{s_{i,n}^2\varsigma_{i,n}^2(t-h) + \sigma_{i,n}^2}} + \frac{s_{i,n}\varpi_{i,n}^2}{\sqrt{s_{i,n}^2\varpi_{i,n}^2 + \sigma_{i,n}^2}} + k_{i,n}\frac{s_{i,n}\varsigma_{i,n}^2}{\sqrt{s_{i,n}^2\varsigma_{i,n}^2 + \sigma_{i,n}^2}}\Big) \tag{21}$$
$$\dot{\hat{\nu}}_{i,n} = \gamma_{i,n}\Big(\frac{s_{i,n}^2\varphi_{i,n}^T\varphi_{i,n}}{\sqrt{s_{i,n}^2\varphi_{i,n}^T\varphi_{i,n} + \sigma_{i,n}^2}} - \iota_{i,n}\hat{\nu}_{i,n}\Big) \tag{22}$$
where $\varpi_{i,n} = \hat{\nu}_{i,n}\rho_{i,n} - \dot{\alpha}_{i,n}^d + \frac{s_{i,n}}{4} + k_{i,n}s_{i,n} + L_{i,n}s_{i,n}$, $\rho_{i,n} = \frac{s_{i,n}\varphi_{i,n}^T\varphi_{i,n}}{\sqrt{s_{i,n}^2\varphi_{i,n}^T\varphi_{i,n} + \sigma_{i,n}^2}}$, and $\sigma_{i,n} > 0$ is an integrable time-varying function.
Use the following inequalities:
$$\begin{aligned} k_{i,n}s_{i,n}\varsigma_{i,n} &\le k_{i,n}\sigma_{i,n} + k_{i,n}\frac{s_{i,n}^2\varsigma_{i,n}^2}{\sqrt{s_{i,n}^2\varsigma_{i,n}^2 + \sigma_{i,n}^2}} \\ -\beta_{i,n}e^{-\mu_{i,n}t}\varsigma_{i,n}(t-h)s_{i,n} &\le \big|\beta_{i,n}e^{-\mu_{i,n}t}\big|\Big(\sigma_{i,n} + \frac{s_{i,n}^2\varsigma_{i,n}^2(t-h)}{\sqrt{s_{i,n}^2\varsigma_{i,n}^2(t-h) + \sigma_{i,n}^2}}\Big) \\ s_{i,n}e_i^u &\le \Psi_{i,1}|s_{i,n}||u_i(t)| + \frac{s_{i,n}^2}{4} + \Psi_{i,2}^2\xi_i^2 \end{aligned}$$
Based on the following inequality derived from Lemma 1,
$$s_{i,n}\big(\theta_{i,n}^T\varphi_{i,n}(\bar{x}_{i,n}) + \varepsilon_{i,n}\big) \le \nu_{i,n}\frac{s_{i,n}^2\varphi_{i,n}^T\varphi_{i,n}}{\sqrt{s_{i,n}^2\varphi_{i,n}^T\varphi_{i,n} + \sigma_{i,n}^2}} + \sigma_{i,n}\nu_{i,n} + |s_{i,n}||\varepsilon_{i,n}|$$
and together with the ETM (4), $\dot{\bar{V}}$ satisfies
$$\dot{\bar{V}} \le \sum_{i=1}^{N}\Big[-\Big(k_{i,n} - \frac{1}{2}\Big)s_{i,n}^2 - \Big(L_{i,n} - \frac{1}{2}\Big)s_{i,n}^2 - \frac{\iota_{i,n}}{2}\tilde{\nu}_{i,n}^2\Big] + \sum_{i=1}^{N}\Big[\sigma_{i,n}\nu_{i,n} + \frac{\iota_{i,n}}{2}\nu_{i,n}^2 + \sigma_{i,n}(k_{i,n} + b_{i,n} + 1) + \Psi_{i,2}^2\bar{\xi}_i^2 + \frac{1}{2}(\varepsilon_{i,n}^m)^2\Big] \le -\bar{c}_v\bar{V} + \bar{M}_{V_{i,n}} \tag{23}$$
where $\bar{c}_v = \min\{\tilde{k}_{s_i}, \frac{\iota_{i,n}}{2}\}$, $\tilde{k}_{s_i} = (k_{i,n} - \frac{1}{2}) + (L_{i,n} - \frac{1}{2})$, $\bar{M}_{V_{i,n}} = \sum_{i=1}^{N}\big[\sigma_{i,n}\nu_{i,n} + \frac{\iota_{i,n}}{2}\nu_{i,n}^2 + \sigma_{i,n}(k_{i,n} + b_{i,n} + 1) + \Psi_{i,2}^2\bar{\xi}_i^2 + \frac{1}{2}(\varepsilon_{i,n}^m)^2\big]$, and $\bar{\xi}_i$ is the bound of $\xi_i$ according to Case 1 or Case 2. By selecting the parameters $k_{i,n} > \frac{1}{2}$, $L_{i,n} > \frac{1}{2}$, and $\iota_{i,n} > 0$, the boundedness of the signals $s_{i,n}$ and $\tilde{\nu}_{i,n}$ is proved because $\bar{V}(t) \le e^{-\bar{c}_v t}\bar{V}(0) + \frac{\bar{M}_{V_{i,n}}}{\bar{c}_v} < \infty$.
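The sketch below assembles the continuously computed input of Eq. (21) from the quantities defined above (our Python illustration; `dalpha_d` stands for $\dot{\alpha}_{i,n}^d$, the helper `soft` implements the recurring Lemma-1-type term, and every name and default value is our assumption):

```python
import numpy as np

def soft(x, y, sigma):
    """Recurring Lemma-1 style term  x*y^2 / sqrt(x^2*y^2 + sigma^2)."""
    return x * y**2 / np.sqrt(x**2 * y**2 + sigma**2)

def control_step_n(s_n, zeta_n, zeta_n_delayed, nu_hat, phi, dalpha_d,
                   Psi1=0.15, k_n=9.0, L_n=3.0, beta_n=0.35, sigma_n=0.08):
    """Continuously computed input u_i of Eq. (21); the ETM of Eq. (4)
    then decides when this value is actually transmitted."""
    pTp = float(phi @ phi)
    rho_n = s_n * pTp / np.sqrt(s_n**2 * pTp + sigma_n**2)
    varpi = nu_hat * rho_n - dalpha_d + s_n / 4 + k_n * s_n + L_n * s_n
    u = -(beta_n * soft(s_n, zeta_n_delayed, sigma_n)
          + soft(s_n, varpi, sigma_n)
          + k_n * soft(s_n, zeta_n, sigma_n)) / (1 - Psi1)
    return u
```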
Remark 4. 
Recently, a dynamically adjusted ETM $t_{i,k+1} = \inf\{t \in \mathbb{R} \mid |e_s| \ge \delta_1|u_i(t)| + \delta_2 e^{-\beta t}\}$ was proposed in [23], which avoids the conservatism in control design caused by the residual term of the ETM proposed in [28]. However, neither of them explores historical information. The controller devised in this paper avoids the residual term of [28] and also utilizes historical information to further save resources.
Remark 5. 
It is noted that the $\operatorname{sign}(\cdot)$ function introduced in the control design induces chattering due to its intrinsic non-smoothness and can be replaced by a $\operatorname{sat}(\cdot)$ function or other functions with equivalent control properties. For the selection of the upper bound $\varepsilon_{i,j}^m$, a relatively conservative large upper bound is adopted in the simulation. Both aspects are ultimately subsumed into the second term of inequality (23) and thus do not affect the stability analysis.

3.3. System Stability and Zeno Behavior Analysis

To characterize the tracking performance of the controller, we also need to establish the boundedness of the auxiliary signal $\varsigma_{i,j}$ and the tracking error $e_{i,j}$.
Proposition 1. 
Utilizing the fact that $\|\alpha_{i,j+1}^d - \alpha_{i,j}\| \le \lambda_i$, where the $\lambda_i$ are positive constants, the boundedness of the auxiliary signal $\varsigma_i$ defined in (7a)-(7c) can be obtained.
Proof. 
Define $V_\varsigma = \sum_{i=1}^{N}\sum_{j=1}^{n}\frac{1}{2}\varsigma_{i,j}^2$ as a Lyapunov function for $\varsigma_{i,j}$. Along the solutions of (7a)-(7c), $\dot{V}_\varsigma$ is obtained as
$$\dot{V}_\varsigma = \sum_{i=1}^{N}\sum_{j=1}^{n}\Big(-k_{i,j}\varsigma_{i,j}^2(t) + \varsigma_{i,j}(t)\varsigma_{i,j+1}(t) + \beta_{i,j}e^{-\mu_{i,j}t}\varsigma_{i,j}(t)\varsigma_{i,j}(t-h) + \varsigma_{i,j}(t)\big(\alpha_{i,j+1}^d(t) - \alpha_{i,j}(t)\big)\Big)$$
Noting that $0 < e^{-\mu_{i,j}t} \le 1$, by Young's inequality we get
$$\dot{V}_\varsigma \le \sum_{i=1}^{N}\sum_{j=1}^{n}\Big[-\Big(k_{i,j} - 1 - \frac{1}{2\kappa_{i,j}} - \frac{\beta_{i,j}}{2}\Big)\varsigma_{i,j}^2(t) + \frac{\beta_{i,j}}{2}\varsigma_{i,j}^2(t-h) + \frac{\kappa_{i,j}}{2}e_{\alpha_{i,j}}^2\Big] \le -\tilde{k}V_\varsigma + \tilde{\beta}\sup_{t-h \le \tau \le t}V_\varsigma(\tau) + \frac{\tilde{\kappa}}{2}\bar{e}_\alpha^2$$
where $e_{\alpha_{i,j}} = \alpha_{i,j+1}^d - \alpha_{i,j}$, $\tilde{k} = \min\{2\tilde{k}_{i,j}, j = 1, \dots, n\}$, $\tilde{k}_{i,j} = k_{i,j} - 1 - \frac{1}{2\kappa_{i,j}} - \frac{\beta_{i,j}}{2}$, $\tilde{\beta} = \max\{\beta_{i,j}\}$, $\tilde{\kappa} = \max\{\kappa_{i,j}\}$, and $\bar{e}_\alpha = \max_{i,j}|e_{\alpha_{i,j}}|$.
In view of [22] and the generalized Halanay inequality [27], the boundedness of $\varsigma_{i,j}$ is obtained by selecting $k_{i,j} > 1 + \frac{1}{2\kappa_{i,j}} + \frac{\beta_{i,j}}{2}$ with $\kappa_{i,j} > 0$, $\beta_{i,j} > 0$. □
Remark 6. 
The condition $\|\alpha_{i,j+1}^d - \alpha_{i,j}\| \le \lambda_i$ is common; see, e.g., [24,31] and the references therein.
The following theorem states the key conclusion of this paper and establishes the stability of system (1) under the event-triggered execution of control tasks.
Theorem 1. 
Consider the nonlinear MASs (1) under the event-triggered adaptive NN controller composed of (12), (16), and (21), with the NN learning laws given by (17) and (22). If Assumptions 1 and 2 hold, then the tracking error $e_{i,j}$ is uniformly ultimately bounded, and no Zeno behavior occurs.
Proof. 
From (23), the boundedness of the signals $s_{i,j}$ and $\tilde{\nu}_{i,j}$ is obtained. Together with the boundedness of $\varsigma_{i,j}$ and the definition $s_{i,j} = e_{i,j} - \varsigma_{i,j}$, we conclude that $\|e_i\|^2 \le 2\|s_i\|^2 + 2\|\varsigma_i\|^2 \le 2\big(2e^{-\bar{c}_v t}\bar{V}(0) + \frac{\bar{M}_{V_{i,n}}}{\bar{c}_v}\big) + M_{\varsigma_i} = M_{e_i}$. By defining $e_1 = [e_{1,1}, \dots, e_{N,1}]^T$, we have
$$e_1 = (L + W)\delta$$
where $\delta = y - \mathbf{1}_N y_r$ with $y_r \in \mathbb{R}$, $\mathbf{1}_N = [1, \dots, 1]^T \in \mathbb{R}^N$, and $y = [y_1, \dots, y_N]^T \in \mathbb{R}^N$, so that $\delta$ denotes the stacked tracking error between the followers' outputs and the leader output $y_r$; $W = \operatorname{diag}\{w_i\}$. Combining Assumption 2 with Lemma 4, which establishes the nonsingularity of $L + W$, it follows that $\|\delta\| \le \|(L + W)^{-1}\|\,\|e_1\| \le \|(L + W)^{-1}\|\,M_{e_i}$.
Then, we prove that infinite triggering does not occur. From $|e_i^u| = |u_i^d - u_i(t)|$, $t \in [t_{i,k}, t_{i,k+1})$, we have
$$\frac{d}{dt}|e_i^u| = \operatorname{sign}(e_i^u)\,\dot{e}_i^u \le |\dot{u}_i|$$
From the preceding analysis and [22], $u_i$ is bounded because it is composed of bounded signals; thus $\frac{d}{dt}|e_i^u| \le \bar{u}_i$, where $\bar{u}_i$ is a positive constant bounding $|\dot{u}_i|$. Then $|e_i^u| \le \bar{u}_i(t - t_{i,k})$ for $t \in [t_{i,k}, t_{i,k+1})$. Together with the event-triggered protocol (4), it is obtained that $\Psi_{i,2}\xi_i(t_0)e^{-Q_i(t_{i,k+1} - t_0)} \le \Psi_{i,2}\xi_i(t_{i,k+1}) \le |e_i^u(t_{i,k+1})| \le \bar{u}_i(t_{i,k+1} - t_{i,k}) = \bar{u}_i T_k^i$, from which it follows that $T_k^i \ge \frac{\Psi_{i,2}}{\bar{u}_i}\xi_i(t_0)e^{-Q_i(t_{i,k+1} - t_0)}$. If $\lim_{k \to \infty} T_k^i = 0$, then the above expression gives $0 < \Psi_{i,2}\xi_i(t_0)e^{-Q_i(t_i - t_0)} \le \bar{u}_i T_k^i \to 0$, which is a contradiction. Thus, we can conclude that $T_k^i > 0$. This indicates that Zeno behavior is excluded. This completes the proof. □

4. Simulation

To evaluate the performance of the proposed scheme, we examine a multi-agent system comprising one leader and five followers, each modeled as [32]
$$\begin{aligned} \dot{x}_{i,1} &= x_{i,2} + 0.12\,x_{i,1}\sin(x_{i,1}) \\ \dot{x}_{i,2} &= u_i + 0.8\sin(x_{i,1}x_{i,2}) \\ y_i &= x_{i,1} \end{aligned} \qquad i = 1, 2, 3, 4, 5$$
The leader's trajectory is generated by $y_r(t) = 0.6 + 0.3\sin(0.15t)$.
The multi-agent system adopts a directed communication topology. The adjacency and leader-coupling matrices are
$$B = \begin{bmatrix} 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad W = \operatorname{diag}(1, 0, 0, 0, 0).$$
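A quick check (ours, not the paper's) that this topology satisfies Lemma 4: the printed $B$ appears to list outgoing edges (agent 1 to agents 2 and 3, agent 2 to agent 4, agent 3 to agent 5), so we transpose it to match the in-neighbor convention $c_{ij} > 0$ iff $(j, i) \in \mathcal{E}$ of Section 2 before forming $L + W$:

```python
import numpy as np

B_out = np.array([[0, 1, 1, 0, 0],      # rows read as outgoing edges (our reading)
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 1],
                  [0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0]], dtype=float)
B = B_out.T                              # in-neighbor convention of Section 2
R = np.diag(B.sum(axis=1))               # in-degree matrix
L = R - B                                # Laplacian
W = np.diag([1.0, 0, 0, 0, 0])           # only agent 1 is pinned to the leader
print(np.linalg.det(L + W))              # 1.0: L + W is nonsingular (Lemma 4)
```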
In the simulations, a Gaussian RBF NN is adopted to approximate the unknown nonlinearities. The NN input is the local state $(y_i, v_i)$, and $M = 7$ Gaussian nodes are used per backstepping layer (the same RBF configuration is applied to both layers). The centers are fixed at $(0.8, 0.8)$, $(0.8, 0.8)$, $(0, 0)$, $(0.8, 0.8)$, $(0.8, 0.8)$, $(0.4, 0.4)$, and $(0.4, 0.4)$, with the common width $b_w = 0.9$. The scalar adaptive parameters are initialized as $\hat{\nu}_{i,1}(0) = \hat{\nu}_{i,2}(0) = 0.5$, and the learning gains and leakage coefficients are chosen as $\gamma_1 = \gamma_2 = 6.0$ and $\iota_1 = \iota_2 = 0.35$. Moreover, the regularization parameters are set as $[\sigma_{i,1}, \sigma_{i,2}] = [0.08, 0.08]$, $\sigma_{\min} = 10^{-4}$, and $\sigma_{\text{decay}} = 0.25$.
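A sketch of the Gaussian RBF basis described here (our Python illustration; the signs of the printed centers appear to have been lost in extraction, so the center list below is a placeholder consistent with the stated magnitudes, not the authors' exact choice):

```python
import numpy as np

def rbf_basis(z, centers, bw=0.9):
    """Gaussian RBF vector phi(z) with common width bw; z = (y_i, v_i)."""
    d2 = ((z - centers) ** 2).sum(axis=1)   # squared distances to centers
    return np.exp(-d2 / (2 * bw ** 2))

# M = 7 two-dimensional centers; magnitudes follow the paper, signs are
# our placeholder choice since they did not survive extraction.
centers = np.array([[-0.8, -0.8], [0.8, 0.8], [0.0, 0.0],
                    [-0.8, 0.8], [0.8, -0.8], [-0.4, -0.4], [0.4, 0.4]])
phi = rbf_basis(np.array([0.1, 0.0]), centers)
print(phi.shape)   # (7,)
```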
The chosen values of the design parameters are $[k_{i,1}, k_{i,2}] = [8.0, 9.0]$, $[L_{i,1}, L_{i,2}] = [2.5, 3.0]$, $c_{i,1}^{\epsilon} = 0.06$, $[\gamma_{i,1}, \gamma_{i,2}] = [6.0, 6.0]$, and $[\iota_{i,1}, \iota_{i,2}] = [0.35, 0.35]$, and the command filter parameter is $a_2 = 0.06$. The history-based compensation weights and their exponential decay rates are selected as $[\beta_{i,1}, \beta_{i,2}] = [0.35, 0.35]$ and $[\mu_{i,1}, \mu_{i,2}] = [0.12, 0.12]$. The auxiliary-signal gains are chosen as $[k_{\zeta_1}, k_{\zeta_2}] = [6.0, 6.0]$. The event-trigger threshold coefficients are set as $[\Psi_{i,1}, \Psi_{i,2}] = [0.15, 0.35]$, and the input saturation bound is chosen as $U_{\text{SAT}} = 40$. The memory length is set to $h = 0.1$.
To explicitly encode a memoryless configuration by parameter choice, we set a constant threshold $\xi = 0.0018$. For Case 1, the parameters are $Q_1 = 2.4$, $\Pi_1 = 1.8$, $\omega_1 = 1.2$, and $M_1 = 0.01$. For Case 2, the parameters are $Q_2 = 0.18$, $\Pi_2 = 1.8$, $c_2^m = 0.35$, $L_1 = 0.06$, $L_2 = 0.06$, $c_\varrho = 0.25$, and $\varrho(0) = 6.0$. The initial states are selected as $x_1(0) = 0.10$, $x_2(0) = 0.20$, $x_3(0) = 0.00$, $x_4(0) = 0.10$, and $x_5(0) = 0.15$, with initial velocities $v_1(0) = v_2(0) = v_3(0) = v_4(0) = v_5(0) = 0.00$. The initial auxiliary signals and command filter states are set to $\zeta_{i,1}(0) = \zeta_{i,2}(0) = 0$ and $\alpha_{i,2}^d(0) = 0$ for all $i$. The initial adaptive parameters are chosen as $\hat{\nu}_{i,1}(0) = \hat{\nu}_{i,2}(0) = 0.5$. Simulation results are presented in Figures 1–11.
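Putting the pieces together, a closed-loop run has roughly the following shape (our Python pseudostructure tying the earlier sketches together; for brevity a simple proportional-derivative stand-in replaces the full adaptive law (21), every follower tracks $y_r$ directly instead of through the topology, and the trigger constants are the Case-1 values above):

```python
import numpy as np

dt, T, h = 1e-3, 60.0, 0.1
steps, delay = int(T / dt), int(h / dt)
N = 5
x = np.array([[0.10, 0.0], [0.20, 0.0], [0.00, 0.0], [0.10, 0.0], [0.15, 0.0]])
u_held = np.zeros(N)
xi = np.full((N, steps + 1), 0.5)            # dynamic trigger parameters

def follower_dot(state, u):
    x1, x2 = state
    return np.array([x2 + 0.12 * x1 * np.sin(x1), u + 0.8 * np.sin(x1 * x2)])

for k in range(steps):
    t = k * dt
    yr = 0.6 + 0.3 * np.sin(0.15 * t)        # leader trajectory
    for i in range(N):
        u = -5.0 * (x[i, 0] - yr) - 2.0 * x[i, 1]   # PD stand-in for Eq. (21)
        if abs(u_held[i] - u) >= 0.15 * abs(u) + 0.35 * xi[i, k]:
            u_held[i] = u                    # event: transmit the new input
        xi_h = xi[i, max(k - delay, 0)]      # single-point memory xi(t - h)
        xi[i, k + 1] = xi[i, k] + dt * (-2.4 * xi[i, k]
                                        + 1.8 * np.exp(-xi_h) * xi_h + 0.01)
        x[i] += dt * follower_dot(x[i], u_held[i])
```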
In this numerical experiment, to validate the practicability of the devised control scheme and to benchmark its performance against a memoryless mechanism, numerical investigations are carried out on the above family of nonlinear multi-agent systems. Figure 1 and Figure 2 show the tracking performance of the system and demonstrate that the controlled outputs track the leader's trajectory. Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show that the Zeno phenomenon does not occur under the proposed ETMs based on historical information. Figure 10 and Figure 11 present a comparison of the total count of triggering events. Compared with the no-memory mechanism in Figure 10b, the introduction of single-point historical information in Figure 11a reduces the number of triggering events. Comparing Figure 11a with Figure 11b shows that the introduction of moving-window historical information further reduces the number of triggering events, thereby validating the efficacy of the proposed method in enhancing resource efficiency.
Table 1 reports the parameter sensitivity results. In Case 2, varying the limiter initial value $\varrho(0) \in \{3, 6, 9\}$ leads to only a slight change in the total number of triggering events $N_{\text{total}}$ (from 4967 to 5058), while the average inter-event time $T_{\text{avg}}^{\text{all}}$ remains around 0.079–0.081 s, indicating a weak influence of $\varrho(0)$ within the tested range. A similarly limited variation is observed when sweeping the memory length $h \in \{0.05, 0.10, 0.20\}$: for Case 1, $N_{\text{total}}$ decreases from 5692 to 5545 and $T_{\text{avg}}^{\text{all}}$ increases slightly; for Case 2, $N_{\text{total}}$ varies between 4902 and 5058 and $T_{\text{avg}}^{\text{all}}$ stays around 0.079–0.082 s. Table 2 summarizes the overall and per-agent statistics. Compared with the no-memory scheme ($N_{\text{total}} = 7194$), Case 1 reduces $N_{\text{total}}$ to 5611 (a 22.0% saving), and Case 2 further reduces it to 5058 (a 29.7% saving); all agents exhibit fewer triggering events, and Case 2 achieves higher savings in general. Meanwhile, the minimum inter-event time is $T_{\min}^{\text{all}} = 0.001$ s for all three schemes (equal to the simulation step size), whereas the average inter-event time increases from $T_{\text{avg}}^{\text{all}} = 0.055628$ s to 0.071302 s (Case 1) and 0.079058 s (Case 2), consistent with the reduction in triggering events.
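The reported savings follow directly from the event counts; a one-line check (ours) of the Table 2 percentages:

```python
n_nomem, n_case1, n_case2 = 7194, 5611, 5058
print(round(100 * (1 - n_case1 / n_nomem), 1))   # 22.0 (% saving, Case 1)
print(round(100 * (1 - n_case2 / n_nomem), 1))   # 29.7 (% saving, Case 2)
```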

5. Conclusions

This work addresses the consensus tracking problem for multi-agent systems with uncertainties. A distributed event-driven control protocol based on historical data is proposed to achieve the control objective. First, a dynamic event-triggered scheme with single-point memory is devised. To further exploit historical information, a dynamic event-triggered mechanism incorporating moving window memory is developed, where a dynamic parameter is introduced to constrain the maximum memory length, thereby mitigating potential deviations induced by long-term memory. Second, by constructing a novel error compensation mechanism, virtual control signals and a distributed control law are designed to ensure the boundedness of all closed-loop signals. Finally, simulation results validate the effectiveness of the proposed historical-data-based event-triggered control method in reducing the number of triggering events. Future work will focus on extending the proposed method to more complex scenarios, aiming at the safe control of multiple UAVs.

Author Contributions

X.L.: conceptualization, methodology, writing—original draft preparation, and funding acquisition. H.W.: writing—review and editing. Q.-Y.F.: writing—review and editing, and responsible for ensuring that the descriptions are accurate and agreed upon by all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Young Scientists Fund of the National Natural Science Foundation of China under Grant 62403391 as well as the Young Scientists Fund of Sichuan Natural Science Foundation under Grant 2025ZNSFSC1516. It was funded in part by the National Natural Science Foundation of China under Grant 62473313 and Grant U23A20337 and in part by the Shaanxi Provincial Natural Science Basic Research Program under Grant 2024JC-YBMS-469.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

| Symbol | Meaning |
|---|---|
| $N$ | Number of followers; $i \in \{1, \dots, N\}$ is the agent index. |
| $x_i \in \mathbb{R}^n$ | State of agent $i$; $n$ is the system order. |
| $y_i$, $u_i$ | Output and continuous control input of agent $i$. |
| $y_r(t)$ | Leader/reference trajectory. |
| $y = [y_1, \dots, y_N]^T$ | Stacked follower outputs. |
| $\delta = y - \mathbf{1}_N y_r$ | Output tracking error vector. |
| $\mathcal{G}$ | Directed communication graph. |
| $B = [c_{ij}]$ | Adjacency/weight matrix ($c_{ij} > 0$ if $(j, i) \in \mathcal{E}$). |
| $R = \operatorname{diag}(r_i)$ | In-degree matrix, $r_i = \sum_j c_{ij}$. |
| $L = R - B$ | Laplacian matrix. |
| $W = \operatorname{diag}(w_i)$ | Pinning (leader-coupling) gain matrix. |
| $e_{i,1}$ | First-layer consensus tracking error (neighbor differences plus pinning). |
| $e_1 = [e_{1,1}, \dots, e_{N,1}]^T$ | Stacked error satisfying $e_1 = (L + W)\delta$. |
| $\varsigma_{i,j}$ | Auxiliary/history-based compensation signal at layer $j$. |
| $s_{i,j} = e_{i,j} - \varsigma_{i,j}$ | Compensated (filtered) error at layer $j$. |
| $\alpha_{i,j}$, $\alpha_{i,j}^d$ | Virtual control and its command-filtered version. |
| $t_{i,k}$ | $k$-th triggering instant of agent $i$. |
| $u_i^d(t) = u_i(t_{i,k})$ | Applied/held input between events. |
| $e_i^u = u_i^d(t) - u_i(t)$ | Triggering measurement error. |
| $\Psi_{i,1}$, $\Psi_{i,2}$ | Trigger threshold coefficients. |
| $\xi_i(t)$ | Dynamic triggering parameter (Cases 1–2). |
| $\varrho_i(t)$ | Memory limiter/weight (used in Case 2). |
| $\varphi(\cdot)$ | RBF basis vector; $M$ is the number of neurons. |
| $\varepsilon(\cdot)$ | NN approximation error, bounded by $\varepsilon_m$. |
| $\hat{\nu}_{i,j}$ | Adaptive parameter (minimal learning) at layer $j$. |

References

  1. Ren, Y.; Wang, Q.; Duan, Z. Optimal Distributed Leader-Following Consensus of Linear Multi-Agent Systems: A Dynamic Average Consensus-Based Approach. IEEE Trans. Circuits Syst. II 2022, 69, 1208–1212. [Google Scholar] [CrossRef]
  2. Su, S.; Lin, Z. Distributed Consensus Control of Multi-Agent Systems with Higher Order Agent Dynamics and Dynamically Changing Directed Interaction Topologies. IEEE Trans. Autom. Control 2016, 61, 515–519. [Google Scholar] [CrossRef]
  3. Wang, Q.; Duan, Z.; Wang, J. Distributed Optimal Consensus Control Algorithm for Continuous-Time Multi-Agent Systems. IEEE Trans. Circuits Syst. II 2020, 67, 102–106. [Google Scholar] [CrossRef]
  4. Kada, B.; Khalid, M.; Shaikh, M.S. Distributed Cooperative Control of Autonomous Multi-agent UAV Systems Using Smooth Control. J. Syst. Eng. Electron. 2020, 31, 1297–1307. [Google Scholar] [CrossRef]
  5. Cui, Y.; Liang, Y.; Luo, Q.; Shu, Z.; Huang, T. Resilient Consensus Control of Heterogeneous Multi-UAV Systems with Leader of Unknown Input Against Byzantine Attacks. IEEE Trans. Autom. Sci. Eng. 2025, 22, 5388–5399. [Google Scholar] [CrossRef]
  6. Wang, X.; Wu, J.; Wang, X. Distributed Attitude Consensus of Spacecraft Formation Flying. J. Syst. Eng. Electron. 2013, 24, 296–302. [Google Scholar] [CrossRef]
  7. Olfati-Saber, R.; Shamma, J.S. Consensus Filters for Sensor Networks and Distributed Sensor Fusion. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 15 December 2005; pp. 6698–6703. [Google Scholar]
  8. Yu, W.; Chen, G.; Wang, Z.; Yang, W. Distributed Consensus Filtering in Sensor Networks. IEEE Trans. Syst. Man Cybern. B 2009, 39, 1568–1577. [Google Scholar]
  9. Abbasi, M.; Marquez, H.J. Observer-Based Event-Triggered Consensus Control of Multi-Agent Systems with Time-Varying Communication Delays. IEEE Trans. Autom. Sci. Eng. 2024, 21, 6336–6346. [Google Scholar] [CrossRef]
  10. Zou, H.; Zhang, G. Dynamic Event-Triggered-Based Single-Network ADP Optimal Tracking Control for the Unknown Nonlinear System with Constrained Input. Neurocomputing 2023, 518, 294–307. [Google Scholar] [CrossRef]
  11. Wang, P.; Wang, Z.; Xu, H. Integral-Based Memory Event-Triggered Controller Design for Uncertain Neural Networks with Control Input Missing. Mathematics 2025, 13, 791. [Google Scholar] [CrossRef]
  12. Ren, X.M.; Rad, A.B. Identification of Nonlinear Systems with Unknown Time Delay Based on Time-Delay Neural Networks. IEEE Trans. Neural Netw. 2007, 18, 1536–1541. [Google Scholar] [CrossRef] [PubMed]
  13. Li, T.; Li, Z.; Wang, D.; Chen, C.L. Output-Feedback Adaptive Neural Control for Stochastic Nonlinear Time-Varying Delay Systems with Unknown Control Directions. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1188–1201. [Google Scholar] [CrossRef] [PubMed]
  14. Liu, Y.; Liu, Y.; Wu, J.; Tao, J.; Li, M.; Su, C.; Lu, R. Adaptive Neural Network Tracking Control for Unknown High-Order Nonlinear Systems: A Constructive Approximation Set Based Approach. Eng. Appl. Artif. Intell. 2025, 162, 112371. [Google Scholar] [CrossRef]
  15. Gao, S.; Wang, J.; Ren, S.; Peng, B. Performance-Barrier-Based Event-Triggered Leader–Follower Consensus Control for Nonlinear Multi-Agent Systems. Neurocomputing 2025, 657, 131664. [Google Scholar] [CrossRef]
  16. Shi, J. Cooperative Control for Nonlinear Multi-Agent Systems Based on Event-Triggered Scheme. IEEE Trans. Circuits Syst. II 2021, 68, 1977–1981. [Google Scholar] [CrossRef]
  17. Choi, Y.H.; Yoo, S.J. Neural-Network-Based Distributed Asynchronous Event-Triggered Consensus Tracking of a Class of Uncertain Nonlinear Multi-Agent Systems. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 2965–2979. [Google Scholar] [CrossRef]
  18. Tong, S.; Zhou, H.; Li, Y. Neural Network Event-Triggered Formation Fault-Tolerant Control for Nonlinear Multiagent Systems With Actuator Faults. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 7571–7582. [Google Scholar] [CrossRef]
  19. Gao, F.; Chen, W.; Li, Z.; Li, J.; Xu, B. Neural Network-Based Distributed Cooperative Learning Control for Multiagent Systems via Event-Triggered Communication. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 407–419. [Google Scholar] [CrossRef]
  20. Peng, Z.; Luo, R.; Hu, J.; Shi, K.; Ghosh, B.K. Distributed Optimal Tracking Control of Discrete-Time Multiagent Systems via Event-Triggered Reinforcement Learning. IEEE Trans. Circuits Syst. I 2022, 69, 3689–3700. [Google Scholar] [CrossRef]
  21. Wang, X.A.; Zhang, G.J.; Niu, B.; Wang, D.; Wang, X.M. Event-Triggered-Based Consensus Neural Network Tracking Control for Nonlinear Pure-Feedback Multiagent Systems with Delayed Full-State Constraints. IEEE Trans. Autom. Sci. Eng. 2024, 21, 7390–7400. [Google Scholar] [CrossRef]
  22. Liu, Y.; Chen, Y. Dynamic Memory Event-triggered Adaptive Control for a Class of Strict-Feedback Nonlinear Systems. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3470–3474. [Google Scholar] [CrossRef]
  23. Li, Y.; Li, Y.X.; Tong, S. Event-Based Finite-Time Control for Nonlinear Multiagent Systems with Asymptotic Tracking. IEEE Trans. Autom. Control 2023, 68, 3790–3797. [Google Scholar] [CrossRef]
  24. Li, Y.X. Command Filter Adaptive Asymptotic Tracking of Uncertain Nonlinear Systems with Time-Varying Parameters and Disturbances. IEEE Trans. Autom. Contr. 2022, 67, 2973–2980. [Google Scholar] [CrossRef]
  25. Ren, W.; Cao, Y. Distributed Coordination of Multi-Agent Networks: Emergent Problems, Models, and Issues; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  26. Li, Y.X.; Yang, G.H. Adaptive Asymptotic Tracking Control of Uncertain Nonlinear Systems with Input Quantization and Actuator Faults. Automatica 2016, 72, 177–185. [Google Scholar] [CrossRef]
  27. Cai, Z.; Huang, L. Generalized Lyapunov Approach for Functional Differential Inclusions. Automatica 2020, 113, 108740. [Google Scholar] [CrossRef]
  28. Liang, H.; Liu, G.; Zhang, H.; Huang, T. Neural-Network-Based Event-Triggered Adaptive Control of Nonaffine Nonlinear Multiagent Systems With Dynamic Uncertainties. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2239–2250. [Google Scholar] [CrossRef]
  29. Davó, M.A.; Prieur, C.; Fiacchini, M. Stability Analysis of Output Feedback Control Systems with a Memory-Based Event-Triggering Mechanism. IEEE Trans. Autom. Contr. 2017, 62, 6625–6632. [Google Scholar] [CrossRef]
  30. Girard, A. Dynamic Triggering Mechanisms for Event-Triggered Control. IEEE Trans. Autom. Contr. 2015, 60, 1992–1997. [Google Scholar] [CrossRef]
  31. Zhao, L.; Yu, J.; Lin, C.; Ma, Y. Adaptive Neural Consensus Tracking for Nonlinear Multiagent Systems using Finite-Time Command Filtered Backstepping. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 2003–2012. [Google Scholar] [CrossRef]
  32. Liu, R.; Xing, L.; Zhong, Y.; Deng, H.; Zhong, W. Adaptive Fixed-Time Fuzzy Containment Control for Uncertain Nonlinear Multiagent Systems with Unmeasurable States. Sci. Rep. 2024, 14, 15785. [Google Scholar] [CrossRef]
Figure 1. Paths of $y_r$ and $\dot{y}_r$: (a) The path tracking of $y_r$. (b) The speed path tracking of $\dot{y}_r$.
Figure 2. Tracking error of $e_y$ and $e_{i,1}$: (a) Tracking error of $e_y$. (b) Tracking error of $e_{i,1}$.
Figure 3. Triggering time intervals of agent 1: (a) Triggering time intervals of agent 1 without memory. (b) Triggering time intervals of agent 1 for case 1 with memory.
Figure 4. Triggering time intervals of agent 1 and agent 2: (a) Triggering time intervals of agent 1 for case 2 with memory. (b) Triggering time intervals of agent 2 without memory.
Figure 5. Triggering time intervals of agent 2: (a) Triggering time intervals of agent 2 for case 1 with memory. (b) Triggering time intervals of agent 2 for case 2 with memory.
Figure 6. Triggering time intervals of agent 3: (a) Triggering time intervals of agent 3 without memory. (b) Triggering time intervals of agent 3 for case 1 with memory.
Figure 7. Triggering time intervals of agent 3 and agent 4: (a) Triggering time intervals of agent 3 for case 2 with memory. (b) Triggering time intervals of agent 4 without memory.
Figure 8. Triggering time intervals of agent 4: (a) Triggering time intervals of agent 4 for case 1 with memory. (b) Triggering time intervals of agent 4 for case 2 with memory.
Figure 9. Triggering time intervals of agent 5: (a) Triggering time intervals of agent 5 without memory. (b) Triggering time intervals of agent 5 for case 1 with memory.
Figure 10. Triggering time intervals of agent 5 and total count of triggering events: (a) Triggering time intervals of agent 5 for case 2 with memory. (b) Total count of triggering events without memory.
Figure 11. Total count of triggering events: (a) Total count of triggering events for case 1 with memory. (b) Total count of triggering events for case 2 with memory.
Table 1. Parametric studies of triggering performance.

(1) Limiter initial value $\varrho(0)$ (Case 2)

| $\varrho(0)$ | $N_{\text{total}}$ | $T_{\min}^{\text{all}}$ (s) | $T_{\text{avg}}^{\text{all}}$ (s) |
|---|---|---|---|
| 3.00 | 4967 | 0.001000 | 0.080583 |
| 6.00 | 5058 | 0.001000 | 0.079058 |
| 9.00 | 4999 | 0.001000 | 0.080030 |

(2) Memory length $h$

| $h$ | $N_{\text{total}}$ (Case 1) | $T_{\min}^{\text{all}}$ (Case 1) | $T_{\text{avg}}^{\text{all}}$ (Case 1) | $N_{\text{total}}$ (Case 2) | $T_{\min}^{\text{all}}$ (Case 2) | $T_{\text{avg}}^{\text{all}}$ (Case 2) |
|---|---|---|---|---|---|---|
| 0.050 | 5692 | 0.001000 | 0.070296 | 4931 | 0.001000 | 0.081162 |
| 0.100 | 5611 | 0.001000 | 0.071302 | 5058 | 0.001000 | 0.079058 |
| 0.200 | 5545 | 0.001000 | 0.072063 | 4902 | 0.001000 | 0.081610 |
Table 2. Summary triggering statistics (overall and per-agent).

Overall triggering-time statistics

| Method | $N_{\text{total}}$ | $T_{\min}^{\text{all}}$ (s) | $T_{\text{avg}}^{\text{all}}$ (s) |
|---|---|---|---|
| No memory | 7194 | 0.001000 | 0.055628 |
| Case 1 (single-point memory) | 5611 | 0.001000 | 0.071302 |
| Case 2 (moving window memory) | 5058 | 0.001000 | 0.079058 |

Per-agent triggering counts and savings

| Agent $i$ | $N_i$ (no memory) | $N_i$ (Case 1) | $N_i$ (Case 2) | Saving (Case 1) | Saving (Case 2) |
|---|---|---|---|---|---|
| 1 | 964 | 764 | 681 | 20.75% | 29.36% |
| 2 | 1401 | 1116 | 976 | 20.34% | 30.34% |
| 3 | 1319 | 1056 | 908 | 19.94% | 31.16% |
| 4 | 1732 | 1342 | 1255 | 22.51% | 27.54% |
| 5 | 1778 | 1333 | 1238 | 25.03% | 30.37% |
| Total | 7194 | 5611 | 5058 | 22.0% | 29.7% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
