Article

Periodic Event-Triggered Consensus Relying on Previous Information in Leader-Following Multi-Agent Systems

School of Mathematics and Physics, North China Electric Power University, Beijing 102206, China
*
Author to whom correspondence should be addressed.
Symmetry 2026, 18(2), 271; https://doi.org/10.3390/sym18020271
Submission received: 3 December 2025 / Revised: 25 January 2026 / Accepted: 29 January 2026 / Published: 31 January 2026

Abstract

In this paper, we consider the consensus of leader-following multi-agent systems. We extend a window-based periodic event-triggered strategy to leader-following multi-agent systems. This strategy was originally proposed by Seidel et al. for leaderless consensus. We give an upper bound on the evolution of the Lyapunov function and present the consensus condition for our considered leader-following system. This is accomplished by adding the previous information to the general triggering conditions and allowing an increase in the Lyapunov function within a local range as long as a decreasing tendency is still guaranteed. Finally, some simulation examples are given to demonstrate that the proposed approach ensures system consensus while reducing the communication costs.

1. Introduction

Distributed multi-agent systems (MASs) are composed of multiple autonomous agents that interact with their neighbors. The consensus problem, which requires the states of all agents to eventually converge to a common value, is solved by designing control strategies that rely on the states of their neighbors. This issue is relevant to many practical applications [1,2,3].
In recent years, control methods for nonlinear multi-agent systems have been extensively studied, such as adaptive back-stepping control based on neural network approximation [4], group learning dynamic modeling based on the governing equations [5], and finite-time optimal fault-tolerant control combined with reinforcement learning [6]. However, nonlinear methods often face challenges such as high computational complexity, difficult parameter tuning, and complex theoretical analysis; their applicability may therefore be limited in large-scale systems or under strict real-time requirements. In contrast, linear system models offer clear structures, mature analysis tools, and broad applicability, and have therefore received extensive attention from researchers.
In early research, continuous-time strategies were mainly used [7,8,9,10,11]. These methods require agents to continuously communicate and update control, which places significant pressure on the communication and computational resources of MASs. Especially in large-scale networks, such update strategies not only lead to unnecessary energy consumption but are also difficult to implement in practical applications. To reduce the communication costs and enhance the practical availability of multi-agent systems, some researchers have adopted an update strategy consisting of periodic sampling [12,13]. Under this periodic sampling strategy, each agent updates its state according to the prescribed period. In the context of periodic sampling, relevant research has achieved progress. Based on periodic sampling, some researchers have proposed periodic event-triggered control (PETC) strategies. Unlike periodic sampling control strategies, this control strategy only updates the agents’ states when specific triggering conditions are met. Additionally, some researchers have discussed the finite-time scaled consensus problem for hybrid multi-agent systems comprising both continuous-time and discrete-time dynamic agents. For example, in [14], based on the conjugate gradient method, the problem of finite-time proportional consensus for hybrid multi-agent systems was studied, while, in [15], using pulse modulation control, the problem of time-varying proportional synchronization for edge systems with mixed dynamics was explored.
The PETC method has been increasingly applied to the consensus of multi-agent systems [16,17,18,19,20,21,22,23,24]. For example, a PETC algorithm that discretizes continuous-time MASs based on a zero-order hold to ensure consensus among agent states was proposed for multi-agent systems with an undirected graph topology [20]. A PETC strategy for consensus tracking control in stochastic MASs with a switching topology was proposed in [22]; by combining event-triggering mechanisms with distributed observers, this strategy effectively reduced the communication burden during the tracking process. Leader-following MASs have also been widely studied. In leader-following systems, it is often required that the states of all followers eventually converge to that of the leader, enabling the system to complete collaborative tasks such as formation control and target tracking. However, the existence of a leader also imposes higher demands on the efficiency of cooperation and on the resource consumption of the followers. In the existing literature, the results on PETC consensus are mostly based on standard Lyapunov theory. Under these triggering strategies, the Lyapunov function is guaranteed to decrease monotonically [23]. To ensure this property, the design of the trigger conditions is often overly conservative. For example, in [24], a consensus control strategy for leader-following multi-agent systems with time-varying delays was proposed; consensus was achieved through trigger rules that enforce a monotone decrease in the Lyapunov function, which, however, leads to excessive communication costs. To reduce the update frequency, some researchers have proposed various improvement strategies that relax the monotonically decreasing requirement on the Lyapunov function.
For the general event-triggered control setup, a periodic event-triggered control strategy based on non-monotonic Lyapunov functions was proposed in [25], which allowed the Lyapunov function to temporarily increase under the premise of ensuring a decreasing trend. Similar ideas were also proposed in [26], which proves that this method can effectively reduce the amount of data transmission. Motivated by [25,26], reference [27] proposed a novel window-based method for multi-agent systems without leaders. This method is achieved by considering the previous system behavior and allows for the utilization of the remaining conservatism in previous time steps.
In this paper, we extend the window-based non-monotonic Lyapunov method that was introduced in [27] to leader-following systems. For our considered systems, we propose a periodic event-triggered control protocol. For each follower, the event-triggering conditions are designed by adding the previous state information of its neighbors. We provide an upper bound for the Lyapunov function and allow an increase in the Lyapunov function within a local range as long as a decreasing tendency is still guaranteed. Subsequently, we provide a rigorous convergence analysis and use some simulation examples to verify the effectiveness of the method.
The remainder of this paper is organized as follows: Section 2 presents the problem formulation and necessary mathematical preliminaries; Section 3 details the design of a time interval-based event-triggered control protocol and provides the consensus analysis; Section 4 validates the proposed method through numerical simulations; and Section 5 concludes the paper with potential future research directions.

2. Problem Formulation

2.1. Preliminaries

In multi-agent systems, communication among agents is essential for agents to reach consensus. A multi-agent system can be modeled as a graph topology of the network. Agents are represented as vertices in the graph, and edges denote communication between agents. Here, for convenience, we briefly define some notations and basic concepts that will be used in this paper.
Let $G = (V, E, A)$ be a finite undirected weighted graph without self-loops, where $V = \{v_1, v_2, \ldots, v_n\}$ is the vertex set of graph $G$, $E \subseteq V \times V = \{(v_i, v_j) \mid i, j \in \{1, 2, \ldots, n\}, i \neq j\}$ is the edge set, and $A$ is the weighted adjacency matrix. For an undirected graph, each edge connects two vertices without a specified direction, so the edges $(v_i, v_j)$ and $(v_j, v_i)$ are considered the same edge. Two vertices $v_i$ and $v_j$ associated with the same edge are called adjacent vertices or neighbors. The set of neighbors of vertex $v_i$ is denoted by $N_i = \{v_j \mid (v_i, v_j) \in E\}$. A graph $G$ is said to be connected if there is at least one path between every pair of vertices in $G$, i.e., a sequence of edges connecting them. The adjacency matrix $A = [a_{ij}]_{n \times n}$ is a non-negative symmetric matrix with $a_{ij} > 0$ if and only if $(v_i, v_j) \in E$, and $a_{ij} = 0$ otherwise. Each entry $a_{ij}$ of $A$ quantifies the strength of the relationship between the vertices $v_i$ and $v_j$. The degree of vertex $v_i$ is defined as the sum of the weights of all edges adjacent to $v_i$, i.e., $d_i = \sum_{j \in N_i} a_{ij}$. We define the degree matrix as the diagonal matrix $D = \mathrm{diag}[d_1, d_2, \ldots, d_n]$ with each vertex's degree on the diagonal. The Laplacian matrix $L = [l_{ij}]_{n \times n}$ of the undirected graph $G = (V, E, A)$ is defined by $L = D - A$, where $l_{ii} = d_i$ and $l_{ij} = -a_{ij}$ ($i \neq j$). The Laplacian matrix $L$ is symmetric and positive semi-definite and therefore has $n$ real eigenvalues, ordered as $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. If graph $G$ is connected, then the zero eigenvalue of $L$ has multiplicity one, and $\lambda_2$ is the smallest positive eigenvalue of $L$. Since every row and column sum of $L$ is zero, the vector of all ones $\mathbf{1} \triangleq [1, 1, \ldots, 1]^T \in \mathbb{R}^n$ is an eigenvector of $L$ associated with the eigenvalue $\lambda_1 = 0$.
Any real symmetric $n \times n$ matrix such as $L$ can be orthonormally diagonalized, i.e., there exists an orthonormal matrix $U \in \mathbb{R}^{n \times n}$ satisfying $U^T = U^{-1}$ such that $L = U \Lambda U^T$, where $\Lambda$ is a diagonal matrix with the eigenvalues of $L$ in increasing order on the principal diagonal, and the columns of $U$ are the corresponding eigenvectors.
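These constructions are easy to check numerically. The sketch below uses a hypothetical four-vertex cycle with unit edge weights (an illustration only, not the topology used later in the paper) to build $D$, $L = D - A$, and to confirm the stated spectral properties:

```python
import numpy as np

# Hypothetical 4-vertex cycle with unit edge weights (illustration only).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

D = np.diag(A.sum(axis=1))   # degree matrix D = diag[d_1, ..., d_n]
L = D - A                    # Laplacian L = D - A

eigvals = np.linalg.eigvalsh(L)   # real eigenvalues in increasing order
print(eigvals)                    # eigenvalues are approximately 0, 2, 2, 4 here
print(L @ np.ones(4))             # L·1 = 0, since every row of L sums to zero
```

Because the cycle is connected, the zero eigenvalue is simple and $\lambda_2 > 0$, as stated above.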
Throughout this paper, $\mathbb{R}^+$ denotes the set of positive real numbers, $A^T$ denotes the transpose of matrix $A$, $\|\cdot\|_2$ denotes the 2-norm of a matrix, and $\mathbb{N}_0$ denotes the set of non-negative integers.

2.2. Model Setup

We consider a multi-agent system consisting of one leader (labeled 0) and $n$ followers (labeled $1, 2, \ldots, n$). Assume that the leader transmits its state information to at least one follower through a unidirectional channel, receiving no feedback in return. We use an undirected graph $G = (V, E, A)$ to represent the communication topology among the followers. Here, vertex $v_i$ ($i = 1, 2, \ldots, n$) represents follower $i$, and the edge $(v_i, v_j) \in E$ indicates that there exists a communication channel between $v_i$ and $v_j$. We assume that the communication topology graph $G$ is connected. We define $B = \mathrm{diag}[b_1, b_2, \ldots, b_n]$ to represent the leader–follower communication links, where $b_i > 0$ if and only if the leader has a communication connection to follower $i$, and $b_i = 0$ otherwise.
The dynamics of the leader are given by
$\dot{x}_0(t) = 0, \qquad (1)$
and the dynamics of followers are described by
$\dot{x}_i(t) = u_i(t), \quad i = 1, 2, \ldots, n, \qquad (2)$
where $x_i \in \mathbb{R}$ is the state of follower $i$, and $u_i(t)$ is the control input, designed according to the $i$th follower's state and the local information received from its neighbors.
In this work, we use the periodic event-triggered control method to investigate the consensus of the leader-following system described by (1) and (2). The leader-following MAS is said to achieve consensus if the state of every follower satisfies $\lim_{t \to \infty} (x_i(t) - x_0(0)) = 0$ for any initial states $x_i(0)$, $i = 1, 2, \ldots, n$. Let $h, 2h, 3h, \ldots$ be the event-checking instants, where $h$ is a fixed positive number called the event-checking period. At every event-checking instant, the event-triggering condition of each follower is checked. Whenever the condition is satisfied for a given follower, the system samples its state and simultaneously broadcasts the sampled information to its neighbors; we refer to the instant at which the condition is satisfied as an event instant. When the neighbors receive the new state information, their controllers are updated according to the designed control protocol.

3. Main Results

In this section, we discuss the consensus problem of the considered system. It should be pointed out again that the communication channels between the leader (agent 0) and its followers are unidirectional, and the leader does not respond to its followers’ state information. For simplicity, in the following discussion, a follower’s neighbors are exclusively other followers.
For each follower $i$ ($i = 1, 2, \ldots, n$), we define a strictly increasing sequence $t_0^i, t_1^i, t_2^i, \ldots$ to represent the event instants at which the predefined event-triggering conditions are satisfied. These event instants belong to the set $\{0, h, 2h, \ldots\}$, and we set $t_0^i = 0$. At $t_0^i = 0$, the leader sends its state to the followers that communicate with it, and all followers sample and send their states to their neighbors. We define the functions $\hat{x}_i(t)$, $i = 1, 2, \ldots, n$, by
$\hat{x}_i(t) = x_i(t_k^i), \quad t_k^i \le t < t_{k+1}^i, \quad k = 0, 1, 2, \ldots.$
The PETC protocol for each follower $i$ is given by
$u_i(t) = -\delta \Big( \sum_{j \in N_i} a_{ij} \big( \hat{x}_i(t) - \hat{x}_j(t) \big) + b_i \big( \hat{x}_i(t) - x_0(0) \big) \Big), \quad t \in [0, +\infty), \qquad (3)$
with the additional controller design parameter $\delta \in \mathbb{R}^+$. Under the designed protocol (3), system (2) can be written as
$\dot{x}(t) = -\delta L \hat{x}(t) - \delta B \big( \hat{x}(t) - x_0(0) \cdot \mathbf{1} \big), \qquad (4)$
where $\mathbf{1}$ is a vector with all elements equal to 1, and
$x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T, \quad \hat{x}(t) = (\hat{x}_1(t), \hat{x}_2(t), \ldots, \hat{x}_n(t))^T.$
We set the vectors
$\xi(t) = (\xi_1(t), \xi_2(t), \ldots, \xi_n(t))^T = x(t) - x_0(0) \cdot \mathbf{1}$
and
$\hat{\xi}(t) = \hat{x}(t) - x_0(0) \cdot \mathbf{1};$
then, since $L (x_0(0) \cdot \mathbf{1}) = 0$,
$\dot{\xi}(t) = \dot{x}(t) = -\delta (L + B) \big( \hat{x}(t) - x_0(0) \cdot \mathbf{1} \big).$
Let $H = L + B$; then, system (4) can be simplified as
$\dot{\xi}(t) = -\delta H \hat{\xi}(t). \qquad (5)$
Since $G$ is connected and $B \neq 0$, the matrix $H$ is symmetric positive definite. Let $\lambda_i\{H\}$ denote the eigenvalues of the matrix $H$. From (5), we get
$\xi(t) - \xi(kh) = -\delta H \hat{\xi}(kh)(t - kh), \quad t \in [kh, (k+1)h). \qquad (6)$
Since we use the PETC method to investigate the leader-following system, and the event-triggering condition is verified only at the event-checking instants $kh$ ($k = 1, 2, \ldots$), we use a discrete-time equivalent instead of the continuous-time model. Then, we have
$\xi((k+1)h) = \xi(kh) - \delta h H \hat{\xi}(kh), \quad k = 0, 1, 2, \ldots. \qquad (7)$
The measurement error $e_i(t)$ of each follower $i$ is defined as the difference between its current state and its latest sampled state, i.e., $e_i(t) = x_i(t) - \hat{x}_i(t)$. We set $e(t) = x(t) - \hat{x}(t)$, so that $\hat{\xi}(t) = \xi(t) - e(t)$, and then
$\xi((k+1)h) = \xi(kh) - \delta h H \big( \xi(kh) - e(kh) \big). \qquad (8)$
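The recursion (8) can be stepped directly. The following sketch, with a hypothetical two-follower matrix $H = L + B$ (not taken from the paper's simulation example), advances the disagreement vector $\xi$ over one event-checking period:

```python
import numpy as np

def petc_step(xi, e, H, delta, h):
    """One event-checking period of (8): xi_next = xi - delta*h*H(xi - e),
    where e = x - x_hat is the measurement error at the checking instant."""
    return xi - delta * h * H @ (xi - e)

# Hypothetical two-follower example (illustration only).
H = np.array([[1.5, -1.0],
              [-1.0, 1.0]])          # H = L + B, symmetric positive definite
xi = np.array([1.0, -1.0])           # disagreement with the leader's state
e = np.zeros(2)                      # both states freshly sampled: x_hat = x

xi_next = petc_step(xi, e, H, delta=0.3, h=0.1)
V0 = xi @ H @ xi
V1 = xi_next @ H @ xi_next
print(xi_next)   # ≈ [0.925, -0.94]
print(V1 < V0)   # the disagreement energy decreases for this step size
```

With a step size satisfying the condition of Theorem 1 below, such a step contracts the Lyapunov function $V = \xi^T H \xi$.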
In previous research, various event-triggering conditions were given for leader-following MASs; see [28,29,30]. In current approaches, consensus is typically achieved by event-triggered strategies that are designed to enforce a monotonic decrease in a Lyapunov function. Here, we use the time-interval-based method proposed in [27] to investigate the consensus of leader-following MASs. This method takes the past system behavior into account, allows a temporary increase in the Lyapunov function, and ensures that a decreasing tendency of this Lyapunov function is still guaranteed.
In this paper, we consider a general form of the triggering condition, which can be written as
$\|e_i(kh)\|_2^2 < f_i(k, x(kh)),$
where $f_i$ is called the trigger function; for each follower $i$, $f_i : \mathbb{N}_0 \times \mathbb{R}^n \rightarrow \mathbb{R}$ and $f_i > 0$.
For each follower $i$ ($i = 1, 2, \ldots, n$), we define a strictly increasing sequence $T_0^i, T_1^i, \ldots = \{T_m^i\}_{m \in \mathbb{N}_0} \subseteq \{kh\}_{k \in \mathbb{N}_0}$, with $T_0^i = 0$ for all followers. These sequences are usually different for different followers. Let $k_m^i h$ denote the instant corresponding to $T_m^i$, i.e., $k_m^i h = T_m^i$. These sequences define the time intervals. For the current time step $kh$, we denote by $T_m^i$ the most recent element of the sequence that begins a time interval, i.e.,
$T_m^i = \max \big\{ \tilde{t} \in \{T_m^i\}_{m \in \mathbb{N}_0} \mid \tilde{t} < kh \big\}.$
The interval over which the trigger rule is evaluated starts at $T_m^i + h$ (i.e., $(k_m^i + 1)h$) and extends up to the present time step $kh$ or until the subsequent interval begins at $T_{m+1}^i$. It is important to distinguish between the time intervals of follower $i$ and its actual sequence of trigger instants $t_k^i$, as they are generally independent and distinct.
Here, we construct a Lyapunov function
$V(t) = \xi^T(t) H \xi(t), \qquad (10)$
where $\xi = (\xi_1, \xi_2, \ldots, \xi_n)^T$, $H = L + B$, $L$ is the Laplacian matrix, and $B$ represents the leader–follower link matrix. Since $H$ is positive definite, $V(t) = 0$ if and only if $\xi(t) = 0$, i.e., if and only if $x_i(t) = x_0(0)$ for all $i \in \{1, 2, \ldots, n\}$. Consequently, if $V(kh) \to 0$ as $k \to \infty$, then all followers' states converge to the leader's state.
In the following, we present a time interval-based trigger scheme for the leader-following MAS (4). We take into account the previous state information and calculate an evolutionary upper bound for the Lyapunov function.
Theorem 1.
Consider the leader-following MAS (4). Let $P_\xi = (2-a)\delta h H - \delta^2 h^2 H^2$, where $a \in (0, 2)$, and let $\sigma$ denote the smallest eigenvalue of $P_\xi$. Let the trigger rule for each follower $i$ ($i = 1, 2, \ldots, n$) be given by
$\|e_i(kh)\|_2^2 < f_i(k, x(kh)) + \sum_{\bar{k} = k_m^i + 1}^{k-1} (1 - \sigma)^{k - \bar{k}} \big( f_i(\bar{k}, x(\bar{k}h)) - \|e_i(\bar{k}h)\|_2^2 \big), \qquad (11)$
for $k \in [k_m^i + 1, k_{m+1}^i]$. For each $k \in \mathbb{N}_0$, if $\delta h < \frac{2-a}{\lambda_n\{H\}}$, then the following upper bound holds:
$V((k+1)h) \le (1 - \sigma)^k V(0) + \|P_e\|_2 \sum_{i=1}^{n} \sum_{\bar{k}=0}^{k-1} (1 - \sigma)^{k - \bar{k} - 1} f_i(\bar{k}, x(\bar{k}h)), \qquad (12)$
where $V$ is defined by (10), $P_e = \frac{1}{a}\delta h H^2 + (1 - \frac{1}{a})\delta^2 h^2 H^3$, and $\lambda_n\{H\}$ is the maximum eigenvalue of $H$.
Proof. 
Consider the Lyapunov function $V(t) = \xi^T H \xi$. From (8), we obtain
$V((k+1)h) = \xi^T((k+1)h) H \xi((k+1)h)$
$= \xi^T(kh) H \xi(kh) - 2\delta h\, \xi^T(kh) H^2 \xi(kh) + 2\delta h\, \xi^T(kh) H^2 e(kh) + \delta^2 h^2\, \xi^T(kh) H^3 \xi(kh) - 2\delta^2 h^2\, \xi^T(kh) H^3 e(kh) + \delta^2 h^2\, e^T(kh) H^3 e(kh)$
$= V(kh) + \xi^T(kh) \big[ \delta^2 h^2 H^3 - 2\delta h H^2 \big] \xi(kh) + e^T(kh) \big[ \delta^2 h^2 H^3 \big] e(kh) + 2\, \xi^T(kh) \big[ \delta h H^2 - \delta^2 h^2 H^3 \big] e(kh). \qquad (13)$
Applying Young's inequality to the cross term in (13), we get
$2\, \xi^T(kh) \big[ \delta h H^2 - \delta^2 h^2 H^3 \big] e(kh) \le a\, \xi^T(kh) \big[ \delta h H^2 - \delta^2 h^2 H^3 \big] \xi(kh) + \frac{1}{a}\, e^T(kh) \big[ \delta h H^2 - \delta^2 h^2 H^3 \big] e(kh). \qquad (14)$
Since $H$ is a positive definite matrix and $(1-a)\delta^2 h^2 H^3 \le \delta^2 h^2 H^3$, substituting (14) into (13) yields
$V((k+1)h) - V(kh) \le -\xi^T(kh) \big[ (2-a)\delta h H^2 - \delta^2 h^2 H^3 \big] \xi(kh) + e^T(kh) \Big[ \frac{1}{a}\delta h H^2 + \Big(1 - \frac{1}{a}\Big)\delta^2 h^2 H^3 \Big] e(kh).$
Define the matrix $P_e = \frac{1}{a}\delta h H^2 + (1 - \frac{1}{a})\delta^2 h^2 H^3$ and note that $(2-a)\delta h H^2 - \delta^2 h^2 H^3 = H P_\xi$. Then, the above inequality can be simplified as follows:
$V((k+1)h) - V(kh) \le -\xi^T(kh) H P_\xi\, \xi(kh) + e^T(kh) P_e\, e(kh). \qquad (15)$
Furthermore, we get the following upper bound on the Lyapunov function (the derivation is given in Remark 1):
$V((k+1)h) \le (1 - \sigma) V(kh) + \|P_e\|_2 \|e(kh)\|_2^2. \qquad (16)$
Iterating (16), we obtain
$V(kh) \le (1 - \sigma)^k V(0) + \|P_e\|_2 \sum_{i=1}^{n} \sum_{\bar{k}=0}^{k-1} (1 - \sigma)^{k - \bar{k} - 1} \|e_i(\bar{k}h)\|_2^2.$
Suppose that the current step $kh$ belongs to the $m$-th time interval, i.e., $kh \in [T_m^i, T_{m+1}^i)$. Then, we have
$\sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} \|e_i(\bar{k}h)\|_2^2 = (1-\sigma)^{k-1} \|e_i(0)\|_2^2 + \sum_{j=0}^{m-1} \sum_{\bar{k}=k_j^i+1}^{k_{j+1}^i} (1-\sigma)^{k-\bar{k}-1} \|e_i(\bar{k}h)\|_2^2 + \sum_{\bar{k}=k_m^i+1}^{k-1} (1-\sigma)^{k-\bar{k}-1} \|e_i(\bar{k}h)\|_2^2.$
From (11), for each time step in the $m$-th time interval, we have
$f_i(k, x(kh)) - \|e_i(kh)\|_2^2 + \sum_{\bar{k}=k_m^i+1}^{k-1} (1-\sigma)^{k-\bar{k}} \big( f_i(\bar{k}, x(\bar{k}h)) - \|e_i(\bar{k}h)\|_2^2 \big) > 0.$
For the $j$-th time interval $[T_j^i, T_{j+1}^i)$, $j = 0, 1, \ldots, m-1$, the time steps are $T_j^i + h, T_j^i + 2h, \ldots, T_{j+1}^i$, and we get
$\sum_{\bar{k}=k_j^i+1}^{k_{j+1}^i} (1-\sigma)^{k_{j+1}^i - \bar{k}} \|e_i(\bar{k}h)\|_2^2 < \sum_{\bar{k}=k_j^i+1}^{k_{j+1}^i} (1-\sigma)^{k_{j+1}^i - \bar{k}} f_i(\bar{k}, x(\bar{k}h)).$
Note that $e_i(0) = 0$ for all followers at $t = 0$. Multiplying the above inequality by $(1-\sigma)^{k - k_{j+1}^i - 1} > 0$, we have
$\sum_{\bar{k}=k_j^i+1}^{k_{j+1}^i} (1-\sigma)^{k-\bar{k}-1} \big( \|e_i(\bar{k}h)\|_2^2 - f_i(\bar{k}, x(\bar{k}h)) \big) < 0,$
and, since $k - \bar{k} - 1 = (k-1) - \bar{k}$,
$\sum_{\bar{k}=k_m^i+1}^{k-1} (1-\sigma)^{k-\bar{k}-1} \|e_i(\bar{k}h)\|_2^2 = \sum_{\bar{k}=k_m^i+1}^{k-1} (1-\sigma)^{(k-1)-\bar{k}} \|e_i(\bar{k}h)\|_2^2 < \sum_{\bar{k}=k_m^i+1}^{k-1} (1-\sigma)^{(k-1)-\bar{k}} f_i(\bar{k}, x(\bar{k}h)) = \sum_{\bar{k}=k_m^i+1}^{k-1} (1-\sigma)^{k-\bar{k}-1} f_i(\bar{k}, x(\bar{k}h)).$
It follows from the above that
$\sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} \big( \|e_i(\bar{k}h)\|_2^2 - f_i(\bar{k}, x(\bar{k}h)) \big) < 0.$
Hence,
$V((k+1)h) \le (1-\sigma)^k V(0) + \|P_e\|_2 \sum_{i=1}^{n} \sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} f_i(\bar{k}, x(\bar{k}h)),$
which is exactly the bound (12). □
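For implementation purposes, rule (11) amounts to carrying the unused slack $f_i - \|e_i\|_2^2$ of the earlier steps in the current window forward into the triggering threshold. A minimal sketch of this check for one follower (function and variable names are our own):

```python
def should_trigger(k, k_m, sigma, f_hist, e_sq_hist):
    """Evaluate trigger rule (11) at step k for one follower.

    f_hist[k'] holds f_i(k', x(k'h)) and e_sq_hist[k'] holds ||e_i(k'h)||^2
    for the steps k' of the current window, which starts at k_m + 1.
    Returns True when (11) is violated, i.e., when an event must fire."""
    slack = sum((1 - sigma) ** (k - kb) * (f_hist[kb] - e_sq_hist[kb])
                for kb in range(k_m + 1, k))
    return e_sq_hist[k] >= f_hist[k] + slack

# Accumulated slack can suppress an event that a memoryless check would fire:
f_hist = [0.0, 1.0, 1.0, 0.5]
e_sq_hist = [0.0, 0.2, 0.3, 0.9]
print(should_trigger(3, k_m=0, sigma=0.1, f_hist=f_hist, e_sq_hist=e_sq_hist))  # False
print(should_trigger(3, k_m=2, sigma=0.1, f_hist=f_hist, e_sq_hist=e_sq_hist))  # True
```

In the second call the window is empty, so the check reduces to the memoryless condition $\|e_i(kh)\|_2^2 \ge f_i$, and the same error now fires an event.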
Remark 1.
Since $P_\xi = (2-a)\delta h H - \delta^2 h^2 H^2$, the eigenvalues of $P_\xi$ are
$\lambda_j\{P_\xi\} = \big( (2-a) - \delta h \lambda_i\{H\} \big)\, \delta h \lambda_i\{H\}.$
Suppose that $\lambda_j\{P_\xi\}$ ($j = 1, 2, \ldots, n$) is sorted in increasing order. Since $a \in (0, 2)$ and $\delta h < \frac{2-a}{\lambda_n\{H\}}$, the matrix $P_\xi$ is positive definite. Then, $\lambda_1\{P_\xi\}$ is the smallest eigenvalue of $P_\xi$, i.e., $\sigma = \lambda_1\{P_\xi\}$. While both sequences $\lambda_i\{H\}$ and $\lambda_j\{P_\xi\}$ are increasing, their orders do not correspond one-to-one. A mapping $\phi$ is defined to establish the relationship between $j$ and $i$, i.e., $\phi(j) = i$. Using the spectral decomposition, we can write $H = U \Lambda U^T$, where $\Lambda = \mathrm{diag}[\lambda_1\{H\}, \lambda_2\{H\}, \ldots, \lambda_n\{H\}]$. Let $U = Q \Phi$, where the permutation matrix $\Phi$, satisfying $\Phi^{-1} = \Phi^T$, reorders the eigenvectors between $H$ and $P_\xi$. Consequently, we obtain
$H = Q \Phi \Lambda \Phi^T Q^T, \quad P_\xi = Q D Q^T,$
where $D = \Phi \big[ (2-a)\delta h \Lambda - \delta^2 h^2 \Lambda^2 \big] \Phi^T$. Let $y = Q^T \xi(kh)$; then, we have
$\xi^T(kh) H P_\xi\, \xi(kh) = \sum_{j=1}^{n} \lambda_{\phi(j)}\{H\} \lambda_j\{P_\xi\} y_j^2 \ge \lambda_1\{P_\xi\} \sum_{j=1}^{n} \lambda_{\phi(j)}\{H\} y_j^2 = \lambda_1\{P_\xi\} V(kh) = \sigma V(kh). \qquad (21)$
Using the Hölder inequality, we have
$e^T(kh) P_e\, e(kh) \le \|e(kh)\|_2 \|P_e e(kh)\|_2 \le \|P_e\|_2 \|e(kh)\|_2^2. \qquad (22)$
It follows from (21) and (22) that (16) can be derived from (15).
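The spectral bound (21) is easy to spot-check numerically. The sketch below uses a hypothetical $3 \times 3$ matrix $H$ (not the paper's example) and verifies $\xi^T H P_\xi \xi \ge \sigma V$ on random vectors under the step-size condition of Theorem 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical H = L + B for three followers (symmetric positive definite).
H = np.array([[1.5, -0.5, 0.0],
              [-0.5, 1.0, -0.5],
              [0.0, -0.5, 1.5]])
a, delta, h = 1.2, 0.3, 0.1

lam_max = np.linalg.eigvalsh(H).max()
assert delta * h < (2 - a) / lam_max         # step-size condition of Theorem 1

P_xi = (2 - a) * delta * h * H - (delta * h) ** 2 * H @ H
sigma = np.linalg.eigvalsh(P_xi).min()        # sigma = lambda_1{P_xi} > 0

ok = True
for _ in range(100):
    xi = rng.standard_normal(3)
    V = xi @ H @ xi
    ok &= xi @ H @ P_xi @ xi >= sigma * V - 1e-12
print(sigma > 0, ok)
```

Because $P_\xi$ is a polynomial in $H$, the two matrices share eigenvectors, which is exactly why the term-by-term comparison in (21) is possible.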
Theorem 2.
Suppose that the conditions of Theorem 1 hold. If, for all followers $i \in \{1, 2, \ldots, n\}$ and all $k \ge 0$, $0 \le f_i(k, x(kh)) < \infty$ and $\lim_{k \to \infty} f_i(k, x(kh)) = 0$, then all followers' states converge to the leader's state.
Proof. 
Let $f(k, x(kh)) = \sum_{i=1}^{n} f_i(k, x(kh))$; then, $0 \le f(k, x(kh)) < \infty$, and we have
$\sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} f_i(\bar{k}, x(\bar{k}h)) = (1-\sigma)^{k-1} \sum_{\bar{k}=0}^{k-1} \frac{f_i(\bar{k}, x(\bar{k}h))}{(1-\sigma)^{\bar{k}}}.$
According to Kronecker's Lemma, we have
$\sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} f_i(\bar{k}, x(\bar{k}h)) \to 0 \quad \text{as } k \to \infty.$
Then, lim k V ( k h ) = 0 . Therefore, the leader-following multi-agent system can achieve consensus, i.e., the states of all followers converge to the leader’s state. □
The analyses above are all based on the abstract functions $f_i$. In the following, we provide a specific trigger function to verify the validity of the method. For the leader-following MAS (4), we define
$\sum_{i=1}^{n} V_i(kh) = V(kh), \quad V_i(kh) = \frac{1}{2} \sum_{j \in N_i} a_{ij} \big( \hat{x}_i(kh) - \hat{x}_j(kh) \big)^2 + b_i \big( \hat{x}_i(kh) - x_0(0) \big)^2. \qquad (23)$
The upper bound on $V(t)$ can still be given by (12). We define the trigger function as follows:
$f_i(k, x(kh)) = \frac{\sigma - \eta}{\|P_e\|_2} V_i(kh), \qquad (24)$
where $0 \le \eta < \sigma$. Under this choice, $V(kh) < (1 - \eta)^k V(0)$, and, from (23), we can obtain $\lim_{k \to \infty} f_i(k, x(kh)) = 0$.
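The pair (23) and (24) is locally computable: follower $i$ only needs the sampled states of its neighbors and the scalars $\sigma$, $\eta$, and $\|P_e\|_2$. A sketch with our own function names (`Pe_norm` stands for $\|P_e\|_2$; the weights below are illustrative):

```python
def local_lyapunov(i, x_hat, x0, A, b):
    """V_i(kh) from (23), built from sampled states only."""
    n = len(x_hat)
    v = 0.5 * sum(A[i][j] * (x_hat[i] - x_hat[j]) ** 2 for j in range(n))
    return v + b[i] * (x_hat[i] - x0) ** 2

def trigger_function(i, x_hat, x0, A, b, sigma, eta, Pe_norm):
    """f_i(k, x(kh)) = (sigma - eta)/||P_e||_2 * V_i(kh), with 0 <= eta < sigma."""
    return (sigma - eta) / Pe_norm * local_lyapunov(i, x_hat, x0, A, b)

# Two followers, leader linked to follower 0 (illustrative weights).
A = [[0.0, 0.5], [0.5, 0.0]]
b = [0.5, 0.0]
f0 = trigger_function(0, [1.0, -1.0], 0.0, A, b, sigma=0.1, eta=0.04, Pe_norm=2.0)
print(f0)       # ≈ 0.045: V_0 = 0.5*0.5*4 + 0.5*1 = 1.5, scaled by 0.06/2
f_cons = trigger_function(0, [0.0, 0.0], 0.0, A, b, sigma=0.1, eta=0.04, Pe_norm=2.0)
print(f_cons)   # 0.0 at consensus, so f_i -> 0 as Theorem 2 requires
```

As the sampled states approach the leader's state, every $V_i$ vanishes and so does $f_i$, which is the condition needed in Theorem 2.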
Corollary 1.
Suppose that the conditions of Theorem 1 hold and let the trigger function be given by (24); then, the state of each follower $i$ converges to the state of the leader, and the upper bound can be written as
$V(kh) < (1 - \eta)^k V(0), \qquad (25)$
for all $k \in \mathbb{N}_0$.
Proof. 
By (12) and (24), we have
$V(kh) < (1-\sigma)^k V(0) + \|P_e\|_2 \sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} \sum_{i=1}^{n} \frac{\sigma - \eta}{\|P_e\|_2} V_i(\bar{k}h) = (1-\sigma)^k V(0) + \sum_{\bar{k}=0}^{k-1} (1-\sigma)^{k-\bar{k}-1} (\sigma - \eta) V(\bar{k}h). \qquad (26)$
Let $\alpha = 1 - \sigma$ and $\beta = \sigma - \eta$; then, (26) can be written as
$V(kh) < \alpha^k V(0) + \sum_{\bar{k}=0}^{k-1} \alpha^{k-\bar{k}-1} \beta V(\bar{k}h). \qquad (27)$
When $\beta = 0$, the bound $V(kh) < \alpha^k V(0)$ holds directly, so we only discuss the case $\beta \neq 0$. When $k = 1$, it is obvious that $V(h) < \alpha V(0) < (\alpha + \beta) V(0)$. Assume that $V(\bar{k}h) < (\alpha + \beta)^{\bar{k}} V(0)$ holds for all $\bar{k} \le l$ ($l \ge 1$); then, we obtain
$V((l+1)h) < \alpha^{l+1} V(0) + \sum_{\bar{k}=0}^{l} \alpha^{l-\bar{k}} \beta V(\bar{k}h) < \alpha^{l+1} V(0) + \sum_{\bar{k}=0}^{l} \alpha^{l-\bar{k}} \beta (\alpha + \beta)^{\bar{k}} V(0) = (\alpha + \beta)^{l+1} V(0).$
By mathematical induction, $V(kh) < (\alpha + \beta)^k V(0) = (1 - \eta)^k V(0)$ for all $k \in \mathbb{N}_0$, which is exactly (25). □

4. Simulations

In this section, we give some simulation examples to demonstrate the effectiveness of the control strategy. We select two types of time intervals: an event-related time interval and a fixed-length time interval. The event-related time interval means that the current time interval ends at the moment an event occurs, with the new time interval beginning at the next checking instant. The fixed-length time interval is a time interval spanning a given number of checking periods $\varepsilon$.
We consider a leader-following multi-agent system containing one leader and six followers, where 0 represents the leader and 1, 2, 3, 4, 5, and 6 represent the followers.
Assume that the topology between followers is connected and undirected, and that each communication weight is 0.5. The communication topology is given in Figure 1. Then, we can get the weighted adjacency matrix
$A = \begin{bmatrix} 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0.5 & 0 & 0.5 & 0 & 0.5 \\ 0.5 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0.5 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0.5 & 0 & 0.5 & 0 \end{bmatrix}.$
The matrix $B$, which represents the connections between the leader and the followers, is expressed as
$B = \begin{bmatrix} 0.5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$
We set the control gain parameter $\delta = 0.8$; the other parameters are $h = 0.1$, $a = 1.2$, and $\eta = 0.04$. The initial states at the beginning of each simulation are drawn randomly from the specified ranges, i.e., $x_i(0) \in [-2, 2]$, $i = 1, 2, \ldots, n$, and $x_0 \in [-2, 2]$.
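As a reproducibility aid, the following sketch implements the closed loop under these parameters with the plain per-step check $\|e_i(kh)\|_2^2 < f_i$ (the $\varepsilon = 1$ case; the window logic is omitted for brevity). Since Corollary 1 requires $0 \le \eta < \sigma$ and $\sigma$ depends on the topology, the code clips $\eta$ to $\sigma/2$ when necessary; this clipping, the seed, and the step count are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = 0.5 * np.array([[0, 1, 0, 1, 0, 0],
                    [1, 0, 1, 0, 1, 0],
                    [0, 1, 0, 1, 0, 1],
                    [1, 0, 1, 0, 1, 0],
                    [0, 1, 0, 1, 0, 1],
                    [0, 0, 1, 0, 1, 0]], dtype=float)
B = np.diag([0.5, 0.0, 0.5, 0.0, 0.5, 0.0])
L = np.diag(A.sum(axis=1)) - A
H = L + B

delta, h, a = 0.8, 0.1, 1.2
P_xi = (2 - a) * delta * h * H - (delta * h) ** 2 * H @ H
sigma = np.linalg.eigvalsh(P_xi).min()
eta = min(0.04, sigma / 2)                   # keep 0 <= eta < sigma (our clipping)
P_e = (delta * h / a) * H @ H + (1 - 1 / a) * (delta * h) ** 2 * H @ H @ H
Pe_norm = np.linalg.norm(P_e, 2)

x0 = rng.uniform(-2, 2)                      # leader state (constant)
x = rng.uniform(-2, 2, 6)                    # follower initial states
x_hat = x.copy()
V_start = (x - x0) @ H @ (x - x0)

triggers, steps = 0, 400
for k in range(steps):
    u = -delta * (L @ x_hat + B @ (x_hat - x0))   # protocol (3)
    x = x + h * u                                  # exact integration over one period
    for i in range(6):
        Vi = 0.5 * sum(A[i, j] * (x_hat[i] - x_hat[j]) ** 2 for j in range(6)) \
             + B[i, i] * (x_hat[i] - x0) ** 2
        fi = (sigma - eta) / Pe_norm * Vi          # trigger function (24)
        if (x[i] - x_hat[i]) ** 2 >= fi:           # event: sample and broadcast
            x_hat[i] = x[i]
            triggers += 1

V_end = (x - x0) @ H @ (x - x0)
update_rate = triggers / (6 * steps)
print(V_end < V_start, 0.0 < update_rate <= 1.0)
```

The printed update rate plays the role of the percentages reported below; adding the window logic of rule (11) on top of this loop is what reduces it further.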
The X-axis in Figure 2 represents the simulation time, and the Y-axis represents the state values of the leader and followers. We can see from Figure 2 that, for the MAS with an event-related time interval or fixed-length time interval, the states of all followers eventually converge uniformly to the state of the leader.
Next, we compare the number of triggers of systems with event-related time intervals and fixed-length time intervals (here, we choose three fixed lengths, ε = 3, ε = 5, and ε = 10) and a standard PETC method without time intervals, i.e., ε = 1. We conduct 100 simulation experiments and statistically analyze the distribution of the trigger times for each case in each experiment, resulting in the following outcomes.
For the five cases described above, multiple simulation experiments are conducted and the number of triggers in each experiment is recorded. The first five sub-figures in Figure 3 show the distributions of the number of triggers over the experiments, grouped into bins. The sixth sub-figure in Figure 3 shows the specific trigger counts from a randomly selected experiment. The average update rate of the MAS without a time interval is the highest, reaching 38.25%. After adding the event-related time interval, the update rate decreases to 30.83%; the frequency of state information updates clearly decreases, which reduces the communication costs. When the fixed-length time interval strategy is adopted, the system performance depends on the value of $\varepsilon$. When $\varepsilon = 3$, the update rate is 26.33%, a further reduction compared with the event-related time interval. When $\varepsilon = 5$, the update rate drops to 21.83%, and, when $\varepsilon$ increases to 10, the update rate is only 15.08%. These results demonstrate that the fixed-length time interval strategy can effectively reduce the update frequency, and that the size of the reduction is related to the length of the time interval.

5. Conclusions

This paper introduces a periodic event-triggered control strategy relying on previous information for leader-following multi-agent systems. By using the abstract trigger function, it is theoretically deduced that the consensus of the leader-following system is achievable under this trigger strategy. In the simulations, we mainly simulated two types of time intervals: an event-related time interval and a fixed-length time interval. The simulation results show that the strategy based on event-related time intervals reduces the update frequency of the controllers, while the performance of systems with fixed-length time intervals is related to the fixed length and the effect is better than that of event-related time intervals. This study provides an efficient and feasible triggering strategy for the consensus control of leader-following MASs. In future research, we may also attempt to apply this method based on previous information to systems with switching topologies, multiple leaders, or directed graphs. Furthermore, we can also apply the proposed framework to address practical issues such as packet loss, communication delays, and measurement noise.

Author Contributions

Conceptualization, H.W. and A.W.; methodology, H.W.; validation, H.W., A.W., and M.W.; formal analysis, H.W.; data curation, H.W.; writing—original draft preparation, H.W.; writing—review and editing, A.W. and M.W.; visualization, H.W.; supervision, A.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors have no conflicts of interest to disclose.

References

  1. Du, Z.; Du, C.; Yang, R.; Fang, T.; Yu, J.; Zheng, Y. Distributed Cooperative Formation and Obstacle Avoidance for Multiple UAVs by Consensus and Control Barrier Functions. In Proceedings of the 37th Chinese Control and Decision Conference (CCDC), Xiamen, China, 16–19 May 2025. [Google Scholar]
  2. Guerrero, J.A.; Romero, G.; Olivares, D.; Romero-Cruz, L.M. Forced Multi-Agent Bipartite Consensus Control: Application to Quadrotor Formation Flying. IEEE Access 2024, 12, 163652–163670. [Google Scholar] [CrossRef]
  3. Wang, Y.; Cao, J.; Kashkynbayev, A. Multi-Agent Bifurcation Consensus-Based Multi-Layer UAVs Formation Keeping Control and Its Visual Simulation. IEEE Trans. Circuits Syst. I 2023, 70, 3222–3233. [Google Scholar] [CrossRef]
  4. Wang, X.; Pang, N.; Xu, Y.; Huang, T.; Kurths, J. On State-Constrained Containment Control for Nonlinear Multiagent Systems Using Event-Triggered Input. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 2519–2538. [Google Scholar] [CrossRef]
  5. Wang, Z.; Mu, C.; Hu, S.; Chu, C.; Li, X. Modelling the Dynamics of Regret Minimization in Large Agent Populations: A Master Equation Approach. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Vienna, Austria, 23–29 July 2022; pp. 536–544. [Google Scholar]
  6. Wang, X.; Guang, W.; Huang, T.; Kurths, J. Optimized Adaptive Finite-Time Consensus Control for Stochastic Nonlinear Multiagent Systems With Non-Affine Nonlinear Faults. IEEE Trans. Autom. Sci. Eng. 2024, 21, 5012–5023. [Google Scholar] [CrossRef]
  7. Gao, Y.; Wang, L. Asynchronous consensus of continuous-time second-order multi-agent systems in a sampled-data setting. IEEE Trans. Autom. Control 2011, 56, 1227–1231. [Google Scholar] [CrossRef]
  8. Wang, X.; Zhou, Z. Projection-Based Consensus for Continuous-Time Multi-Agent Systems with State Constraints. In Proceedings of the 2015 IEEE 54th Annual Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 1060–1064. [Google Scholar]
  9. Xiao, F.; Wang, L. Asynchronous Consensus in Continuous-Time Multi-Agent Systems With Switching Topology and Time-Varying Delays. arXiv 2006, arXiv:0611932v1. [Google Scholar] [CrossRef]
  10. Wang, Z.; You, K.; Xu, J.; Zhang, H. Consensus Design for Continuous-Time Multi-Agent Systems With Communication Delay. J. Syst. Sci. Complex. 2014, 27, 701–711. [Google Scholar] [CrossRef]
  11. Feng, X.-L.; Yang, X.-L. New Protocols on Group Consensus of Continuous-time Multi-agent Systems. In Proceedings of the 2017 13th International Conference on Computational Intelligence and Security, Hong Kong, China, 15–18 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 77–81. [Google Scholar]
  12. Yang, Q.-Q.; Li, J.; Feng, X.; Wu, S.; Gao, F. Event-triggered Consensus for Second-order Multi-agent Systems via Asynchronous Periodic Sampling Control Approach. Int. J. Control Autom. Syst. 2020, 18, 1–13. [Google Scholar] [CrossRef]
  13. Li, S.; Chen, Y.; Liu, P.X. Double Event-triggered Leader-following Consensus and Fault Detection for Lipschitz Nonlinear Multi-agent Systems via Periodic Sampling Strategy. Nonlinear Dyn. 2023, 111, 8293–8311. [Google Scholar] [CrossRef]
  14. Donganont, M.; Intawichai, S.; Phongchan, S. Finite-Time Scaled Consensus in Hybrid Multi-Agent Systems via Conjugate Gradient Methods. Eur. J. Pure Appl. Math. 2025, 18, 6163. [Google Scholar] [CrossRef]
  15. Donganont, M.; Donganont, S.; Zhang, H. Pulse-modulated Control for Scaled Consensus of Edge Dynamics in Hybrid Multi-agent Systems. Int. J. Control Autom. Syst. 2025, 23, 2503–2513. [Google Scholar] [CrossRef]
  16. Wang, X.; Sun, J.; Wang, G.; Chen, J. Event-triggered Consensus Control of Heterogeneous Multi-agent Systems: Model- and Data-based Analysis. arXiv 2022, arXiv:2208.00867v1. [Google Scholar] [CrossRef]
  17. Sun, J.; Wang, Z.; Fan, X. Periodic event-triggered consensus control for multi-agent systems with switching jointly connected topologies. IET Control Theory Appl. 2020, 14, 3282–3290. [Google Scholar] [CrossRef]
  18. Liu, K.; Ji, Z.; Zhang, X. Periodic event-triggered consensus of multi-agent systems under directed topology. Neurocomputing 2020, 385, 33–41. [Google Scholar] [CrossRef]
  19. Yang, Y.; Shen, B.; Han, Q.-L. Dynamic Event-Triggered Scaled Consensus of Multi-Agent Systems in Reliable and Unreliable Networks. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 1124–1136. [Google Scholar] [CrossRef]
  20. Liu, K.; Ji, Z. Dynamic event-triggered consensus of multi-agent systems under directed topology. Int. J. Robust Nonlinear Control 2023, 33, 4328–4344. [Google Scholar] [CrossRef]
  21. Li, B.; Zhao, L.; Wen, S. Periodic Event-Triggered Consensus of Stochastic Multi-Agent Systems Under Switching Topology. Artif. Intell. Sci. Eng. 2025, 1, 147–156. [Google Scholar] [CrossRef]
  22. Espitia, N. Observer-based event-triggered boundary control of a linear 2 × 2 hyperbolic systems. Syst. Control Lett. 2020, 138, 104668. [Google Scholar] [CrossRef]
  23. Wang, A. Event-based consensus control for single-integrator networks with communication time delays. Neurocomputing 2016, 173, 1715–1719. [Google Scholar] [CrossRef]
  24. Wang, A.; Zhao, Y. Event-triggered consensus control for leader-following multi-agent systems with time-varying delays. J. Frankl. Inst. 2016, 353, 4754–4771. [Google Scholar] [CrossRef]
  25. Linsenmayer, S.; Dimarogonas, D.V.; Allgöwer, F. Periodic Event-Triggered Control for Networked Control Systems Based on Non-Monotonic Lyapunov Functions. Automatica 2019, 106, 35–46. [Google Scholar] [CrossRef]
  26. Mousavi, S.H.; Ghodrat, M.; Marquez, H.J. Integral-Based Event-Triggered Control Scheme for a General Class of Non-Linear Systems. IET Control Theory Appl. 2015, 9, 1982–1988. [Google Scholar] [CrossRef]
  27. Seidel, M.; Hertneck, M.; Yu, P.; Linsenmayer, S.; Dimarogonas, D.V.; Allgöwer, F. A window-based periodic event-triggered consensus scheme for multiagent systems. IEEE Trans. Control Netw. Syst. 2024, 11, 414–426. [Google Scholar] [CrossRef]
  28. Du, S.L.; Liu, T.; Ho, D.W.C. Dynamic event-triggered control for leader-following consensus of multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 3243–3251. [Google Scholar] [CrossRef]
  29. Xu, W.; Ho, D.W.C.; Li, L.; Cao, J. Event-triggered schemes on leader-following consensus of general linear multiagent systems under different topologies. IEEE Trans. Cybern. 2017, 47, 212–223. [Google Scholar] [CrossRef]
  30. Liu, D.; Yang, G.H. A dynamic event-triggered control approach to leader-following consensus for linear multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 6271–6279. [Google Scholar] [CrossRef]
Figure 1. Communication topology.
Figure 2. State convergence under various event-triggered mechanisms.
Figure 3. Distribution of trigger counts over 100 experiments.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, H.; Wang, A.; Wu, M. Periodic Event-Triggered Consensus Relying on Previous Information in Leader-Following Multi-Agent Systems. Symmetry 2026, 18, 271. https://doi.org/10.3390/sym18020271

