1. Introduction
Distributed multi-agent systems (MASs) are composed of multiple autonomous agents that interact with their neighbors. The consensus problem, which requires the states of all agents to eventually converge to a common value, is solved by designing control strategies in which each agent uses only the states of its neighbors. This problem is relevant to many practical applications [1,2,3].
In recent years, control methods for nonlinear multi-agent systems have been extensively studied, including adaptive back-stepping control based on neural-network approximation [4], group-learning dynamic modeling based on governing equations [5], and finite-time optimal fault-tolerant control combined with reinforcement learning [6]. However, nonlinear methods often suffer from high computational complexity, difficult parameter tuning, and involved theoretical analysis, which can limit their use in large-scale systems or scenarios with stringent real-time requirements. In contrast, linear system models offer clear structure, mature analysis tools, and broad applicability, and have therefore received extensive attention from researchers.
In early research, continuous-time strategies were mainly used [7,8,9,10,11]. These methods require agents to communicate and update their controls continuously, which places significant pressure on the communication and computational resources of MASs. In large-scale networks especially, such update strategies not only cause unnecessary energy consumption but are also difficult to implement in practice. To reduce communication costs and enhance the practical applicability of multi-agent systems, some researchers have adopted periodic sampling strategies [12,13], under which each agent updates its state at a prescribed period. Building on periodic sampling, periodic event-triggered control (PETC) strategies have been proposed; unlike purely periodic sampling control, a PETC strategy updates an agent's state only when a specific triggering condition is met. In addition, some researchers have discussed the finite-time scaled consensus problem for hybrid multi-agent systems comprising both continuous-time and discrete-time dynamic agents. For example, in [14], the finite-time proportional consensus problem for hybrid multi-agent systems was studied based on the conjugate gradient method, while, in [15], the time-varying proportional synchronization problem for edge systems with mixed dynamics was explored using pulse modulation control.
The PETC method has been increasingly applied to the consensus of multi-agent systems [16,17,18,19,20,21,22,23,24]. For example, for multi-agent systems with an undirected graph topology, a PETC algorithm that discretizes continuous-time MASs via a zero-order hold to ensure consensus among agent states was proposed in [20]. A PETC strategy for consensus tracking control in stochastic MASs with a switching topology was proposed in [22], where combining event-triggering mechanisms with distributed observers effectively reduced the communication burden during the tracking process. Leader-following MASs have also been widely studied. In leader-following systems, the states of all followers are required to converge to that of the leader, enabling the system to complete collaborative tasks such as formation control and target tracking; however, the presence of a leader also places higher demands on the efficiency of cooperation and the resource consumption of the followers. In the existing literature, most PETC consensus results are based on standard Lyapunov theory, under which the triggering strategy guarantees a monotonic decrease in the Lyapunov function [23]. To ensure this property, the trigger conditions are often designed conservatively. For example, in [24], a consensus control strategy for leader-following multi-agent systems with time-varying delays was proposed, in which the trigger rules enforce a monotone decrease in the Lyapunov function; this, however, leads to an excessively high communication cost. To reduce the update frequency, several improvement strategies have been proposed that relax the monotonic-decrease requirement on the Lyapunov function. For the general event-triggered control setup, a periodic event-triggered control strategy based on non-monotonic Lyapunov functions was proposed in [25], which allows the Lyapunov function to increase temporarily while still guaranteeing an overall decreasing trend. A similar idea was pursued in [26], which showed that this approach can effectively reduce the amount of transmitted data. Motivated by [25,26], reference [27] proposed a novel window-based method for multi-agent systems without leaders; this method takes the previous system behavior into account and utilizes the conservatism remaining from earlier time steps.
In this paper, we extend the window-based non-monotonic Lyapunov method introduced in [27] to leader-following systems. For the considered systems, we propose a periodic event-triggered control protocol. For each follower, the event-triggering condition is designed using the previous state information of its neighbors. We derive an upper bound for the Lyapunov function and allow the Lyapunov function to increase within a local range, as long as a decreasing tendency is still guaranteed. We then provide a rigorous convergence analysis and verify the effectiveness of the method through simulation examples.
The remainder of this paper is organized as follows:
Section 2 presents the problem formulation and necessary mathematical preliminaries;
Section 3 details the design of a time interval-based event-triggered control protocol and provides the consensus analysis;
Section 4 validates the proposed method through numerical simulations; and
Section 5 concludes the paper with potential future research directions.
2. Problem Formulation
2.1. Preliminaries
In multi-agent systems, communication among agents is essential for reaching consensus. The communication network of a multi-agent system can be modeled as a graph: agents are represented as vertices, and edges denote communication links between agents. Here, for convenience, we briefly define some notations and basic concepts that will be used in this paper.
Let $G=(V,E,A)$ be a finite undirected weighted graph without self-loops, where $V=\{v_1,v_2,\ldots,v_n\}$ is the vertex set of graph $G$, $E\subseteq V\times V$ is the edge set, and $A=[a_{ij}]\in\mathbb{R}^{n\times n}$ is the weighted adjacency matrix. In an undirected graph, each edge connects two vertices without a specified direction, so the edges $(v_i,v_j)$ and $(v_j,v_i)$ are considered the same edge. Two vertices $v_i$ and $v_j$ associated with the same edge are called adjacent vertices, or neighbors. The set of neighbors of vertex $v_i$ is denoted by $N_i=\{v_j\in V : (v_i,v_j)\in E\}$. A graph $G$ is said to be connected if there is at least one path, i.e., a sequence of edges, between every pair of vertices in $G$. The adjacency matrix $A$ is a non-negative symmetric matrix with $a_{ij}=a_{ji}>0$ if and only if $(v_i,v_j)\in E$, and $a_{ij}=0$ otherwise. Each entry of $A$ quantifies the strength or characteristics of the relationship between the vertices $v_i$ and $v_j$. The degree of vertex $v_i$ is defined as the sum of the weights of all edges adjacent to $v_i$, i.e., $d_i=\sum_{j=1}^{n}a_{ij}$. We define the degree matrix $D=\mathrm{diag}(d_1,d_2,\ldots,d_n)$ as the diagonal matrix with each vertex's degree on the diagonal. The Laplacian matrix $L=[l_{ij}]$ of the undirected graph $G$ is defined by $L=D-A$, where $l_{ii}=d_i$ and $l_{ij}=-a_{ij}$ for $i\neq j$. The Laplacian matrix $L$ is symmetric and positive semi-definite, and therefore has $n$ real eigenvalues, ordered as $0=\mu_1\leq\mu_2\leq\cdots\leq\mu_n$. If the graph $G$ is connected, then $L$ has a single zero eigenvalue with multiplicity one, and $\mu_2$ is the smallest positive eigenvalue of $L$. Since every row and column sum of $L$ is zero, the vector of all ones is an eigenvector of $L$ associated with the eigenvalue $\mu_1=0$. As a real symmetric matrix, $L$ can be orthonormally diagonalized: there exists an orthonormal matrix $U$ satisfying $U^{\top}U=I$ such that $L=U\Lambda U^{\top}$, where $\Lambda$ is a diagonal matrix with the eigenvalues of $L$ in increasing order on the principal diagonal, and the columns of $U$ are the corresponding eigenvectors.
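These Laplacian properties are easy to check numerically. The sketch below uses a hypothetical connected four-vertex graph with uniform weight 0.5 (the topology and weights are illustrative only, not taken from the paper) and verifies symmetry, the zero eigenvalue, the positivity of the second-smallest eigenvalue, and that the all-ones vector lies in the null space:

```python
import numpy as np

# Hypothetical 4-vertex connected undirected graph (edges 1-2, 1-3, 2-3, 3-4),
# all with weight 0.5; topology and weights are illustrative only.
A = 0.5 * np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))       # degree matrix: row sums of A on the diagonal
Lap = D - A                      # graph Laplacian L = D - A

eigvals = np.linalg.eigvalsh(Lap)            # real eigenvalues in ascending order
print(np.allclose(Lap, Lap.T))               # L is symmetric -> True
print(np.isclose(eigvals[0], 0.0))           # smallest eigenvalue is 0 -> True
print(eigvals[1] > 0)                        # second-smallest > 0: connected -> True
print(np.allclose(Lap @ np.ones(4), 0.0))    # all-ones vector in null space -> True
```

Disconnecting the graph (e.g., removing the edge 3-4) would make the second-smallest eigenvalue zero as well, which is exactly the connectivity test stated above.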
Throughout this paper, $\mathbb{R}_{>0}$ denotes the set of positive real numbers, $A^{\top}$ denotes the transpose of a matrix $A$, $\|\cdot\|$ denotes the 2-norm of a matrix, and $\mathbb{N}$ denotes the set of non-negative integers.
2.2. Model Setup
We consider a multi-agent system consisting of one leader (labeled 0) and $n$ followers (labeled $1,2,\ldots,n$). Assume that the leader transmits its state information to at least one follower through a unidirectional channel and receives no feedback in return. We use an undirected graph $G$ to represent the communication topology among the followers, where vertex $v_i$ represents follower $i$, and the edge $(v_i,v_j)$ indicates that there exists a communication channel between followers $i$ and $j$. We assume that the communication topology graph $G$ is connected. We define the diagonal matrix $B=\mathrm{diag}(b_1,b_2,\ldots,b_n)$ to represent the leader–follower communication links, where $b_i>0$ if and only if the leader has a communication connection to follower $i$, and $b_i=0$ otherwise.
The dynamics of the leader are given by
$$\dot{x}_0(t)=0,\qquad(1)$$
and the dynamics of the followers are described by
$$\dot{x}_i(t)=u_i(t),\qquad i=1,2,\ldots,n,\qquad(2)$$
where $x_i(t)\in\mathbb{R}$ is the state of follower $i$, and $u_i(t)\in\mathbb{R}$ is the control input, designed according to the $i$th follower's state and the local information received from its neighbors.
In this work, we use the periodic event-triggered control method to investigate the consensus of the leader-following system described by (1) and (2). The leader-following MAS is said to achieve consensus if the state of every follower satisfies $\lim_{t\to\infty}|x_i(t)-x_0|=0$ for any initial states. Let $t_k=kh$, $k=0,1,2,\ldots$, be the event-checking instants, where $h$ is a fixed positive number called the event-checking period. At every event-checking instant, the event-triggering condition of each follower is checked. Whenever the condition is satisfied for a given follower, the system samples its state and simultaneously broadcasts the sampled information to its neighbors; we refer to the instants at which the conditions are satisfied as event instants. When the neighbors receive the new state information, their controllers are updated according to the designed control protocol.
3. Main Results
In this section, we discuss the consensus problem of the considered system. It should be pointed out again that the communication channels between the leader (agent 0) and its followers are unidirectional, and the leader does not respond to its followers’ state information. For simplicity, in the following discussion, a follower’s neighbors are exclusively other followers.
For each follower $i\in\{1,2,\ldots,n\}$, we define a strictly increasing sequence $t_0^i<t_1^i<t_2^i<\cdots$ to represent the event instants at which the predefined event-triggered conditions are satisfied. These event instants belong to the set $\{0,h,2h,\ldots\}$, and we set $t_0^i=0$. At $t=0$, the leader sends its state to the followers that are in communication with it, and all followers sample and send their states to their neighbors. We define the functions $\hat{x}_i(t)$, $i=1,2,\ldots,n$, by
$$\hat{x}_i(t)=x_i(t_k^i),\qquad t\in[t_k^i,t_{k+1}^i).$$
The PETC protocol for each follower $i$ is given by
$$u_i(t)=-c\Big(\sum_{j\in N_i}a_{ij}\big(\hat{x}_i(t)-\hat{x}_j(t)\big)+b_i\big(\hat{x}_i(t)-x_0\big)\Big),\qquad(3)$$
with the additional controller design parameter $c>0$. Under the designed protocol (3), system (2) can be written as
$$\dot{x}(t)=-c(L+B)\big(\hat{x}(t)-\mathbf{1}x_0\big),\qquad(4)$$
where $x(t)=[x_1(t),\ldots,x_n(t)]^{\top}$, $\hat{x}(t)=[\hat{x}_1(t),\ldots,\hat{x}_n(t)]^{\top}$, and $\mathbf{1}$ is a vector with all elements equal to 1. Let $H=L+B$ and $\delta(t)=x(t)-\mathbf{1}x_0$, and system (4) can be simplified as
$$\dot{\delta}(t)=-cH\hat{\delta}(t),\qquad(5)$$
where $\hat{\delta}(t)=\hat{x}(t)-\mathbf{1}x_0$.
It is obvious that the matrix $H$ is symmetric positive definite. Let $0<\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n$ denote the eigenvalues of the matrix $H$. From (5), we get
$$\delta(t)=\delta(kh)-c\,(t-kh)\,H\hat{\delta}(kh),\qquad t\in[kh,(k+1)h).\qquad(6)$$
Since we use the PETC method to investigate the leader-following system, and the event-triggering condition is verified only at the event-checking instants $t_k=kh$, we use a discrete-time equivalent instead of the continuous-time model. Writing $\delta(k)$ for $\delta(kh)$, we have
$$\delta(k+1)=\delta(k)-chH\hat{\delta}(k).\qquad(7)$$
The measurement error $e_i(k)$ of each follower $i$ is defined as the difference between its current state and its latest sampled state, i.e., $e_i(k)=x_i(kh)-\hat{x}_i(kh)$. We set $e(k)=[e_1(k),\ldots,e_n(k)]^{\top}=\delta(k)-\hat{\delta}(k)$, and then
$$\delta(k+1)=(I-chH)\delta(k)+chHe(k).\qquad(8)$$
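To illustrate the sampled-data loop, the sketch below simulates single-integrator followers under a plain relative-threshold PETC rule (not the window-based rule developed later). The three-follower topology, the gain c, the period h, and the threshold sigma are all illustrative assumptions. Between checking instants, each follower integrates the held (zero-order-hold) information, giving the step $\delta(k{+}1)=\delta(k)-c\,h\,H\hat{\delta}(k)$ with $H=L+B$:

```python
import numpy as np

# Hypothetical complete 3-follower graph, weight 0.5; leader linked to follower 1.
Lap = np.array([[1.0, -0.5, -0.5],
                [-0.5, 1.0, -0.5],
                [-0.5, -0.5, 1.0]])
B = np.diag([1.0, 0.0, 0.0])
H = Lap + B                      # symmetric positive definite for a connected graph
c, h, sigma = 1.0, 0.1, 0.05     # gain, checking period, trigger threshold (assumed)

rng = np.random.default_rng(0)
delta = rng.standard_normal(3)   # disagreement with the leader at k = 0
held = delta.copy()              # latest broadcast values (zero-order hold)
for k in range(1000):
    err = held - delta                           # deviation of the held value
    fire = np.abs(err) > sigma * np.abs(delta)   # per-follower trigger check
    held = np.where(fire, delta, held)           # triggered followers rebroadcast
    delta = delta - c * h * (H @ held)           # one checking-period step
print(np.linalg.norm(delta) < 1e-3)  # consensus with the leader is reached
```

A smaller threshold sigma makes the loop trigger more often and track the continuously updated case; larger thresholds trade convergence speed for fewer broadcasts, which is exactly the tension the window-based rule of this paper addresses.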
In previous research, various event-triggering conditions have been given for leader-following MASs; see [28,29,30]. In these approaches, consensus is typically achieved by event-triggered strategies designed to enforce a monotonic decrease in a Lyapunov function. Here, we use the time-interval-based method proposed in [27] to investigate the consensus of leader-following MASs. This method takes the past system behavior into account and allows a temporary increase in the Lyapunov function while still guaranteeing its decreasing tendency.
In this paper, we consider a general form of the triggering condition, which can be written as
$$f_i(k)>0,\qquad(9)$$
where $f_i$ is called the trigger function of follower $i$: an event is generated for follower $i$ at the event-checking instant $kh$ whenever (9) is satisfied.
For each follower $i$, we define a strictly increasing sequence of time steps $k_0^i<k_1^i<k_2^i<\cdots$, with $k_0^i=0$ for all followers. These sequences are generally different for different followers. Let $k_m^i h$ denote the time instant corresponding to $k_m^i$. These sequences define the time intervals. For the current time step $k$, we define the beginning of the current time interval as the most recent element of the sequence, i.e.,
$$k_m^i=\max\{k_j^i : k_j^i\leq k\}.$$
The interval for evaluating the trigger rule starts at $k_m^i$ (i.e., at time $k_m^i h$) and extends up to the present time step $k$, or until the subsequent interval begins at $k_{m+1}^i$. It is important to distinguish the time-interval sequence of follower $i$ from its actual sequence of trigger instants $\{t_k^i\}$, as the two are generally independent and distinct.
Here, we construct a Lyapunov function
$$V(k)=\delta^{\top}(k)H\delta(k),\qquad(10)$$
where $\delta(k)=x(k)-\mathbf{1}x_0$, $H=L+B$, $L$ is the Laplacian matrix, and $B$ represents the leader-following link matrix. By construction, $V(k)=0$ if and only if $\delta(k)=0$, and $V(k)>0$ if and only if $\delta(k)\neq 0$. Consequently, if $\lim_{k\to\infty}V(k)=0$, then all followers' states converge to the leader's state.
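As a quick sanity check of these properties, the following sketch (with a hypothetical three-follower topology, not the one from the simulations) verifies that L + B is positive definite when the follower graph is connected and at least one leader link exists, so that V vanishes only at the origin:

```python
import numpy as np

# Hypothetical complete 3-vertex follower graph with weight 0.5.
Lap = np.array([[1.0, -0.5, -0.5],
                [-0.5, 1.0, -0.5],
                [-0.5, -0.5, 1.0]])
B = np.diag([1.0, 0.0, 0.0])    # leader linked to follower 1 only
H = Lap + B

eigs = np.linalg.eigvalsh(H)
print(eigs.min() > 0)                     # H positive definite -> True
V = lambda d: float(d @ H @ d)            # Lyapunov candidate V = d^T H d
print(V(np.zeros(3)) == 0.0)              # V = 0 at the origin -> True
print(V(np.array([1.0, -2.0, 0.5])) > 0)  # V > 0 elsewhere -> True
```

With B = 0 (no leader link), H would reduce to the Laplacian, whose zero eigenvalue would make V only positive semi-definite; the leader link matrix is what removes the null direction.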
In the following, we present a time interval-based trigger scheme for the leader-following MAS (4). We take into account the previous state information and calculate an evolutionary upper bound for the Lyapunov function.
Theorem 1. Consider the leader-following MAS (4), and let $\sigma$ denote the smallest eigenvalue of the matrix defined in the proof below. Let the trigger rule for each follower be given by (11). For each event-checking step, if the corresponding condition holds, then the upper bound (12) holds, where $V$ is defined by (10) and $\lambda_n$ is the maximum eigenvalue of $H$.
Proof. Consider the Lyapunov function $V(k)$ given by (10). From (8), we obtain
. From (8), we obtain
Applying Young’s inequality to (13), we get
Since
H is a positive definite matrix, we have
Define the matrices
Then, inequality (14) can be simplified as follows:
Furthermore, we get the upper bound of the Lyapunov function (the proof process can be seen in Remark 1):
Then,
Suppose that the current time step $k$ belongs to the $m$-th time interval. Then, we have
From (11), for each time step in the $m$-th time interval, we have
For the $j$-th time interval with $j<m$, considering the time steps it contains, we get
Note that this relation holds for all followers at the initial instant. We have
It follows from the above that
Hence,
□
Remark 1. Since , the eigenvalues of are
Suppose that is a monotonically increasing sequence. Since and , we obtain that is a positive definite matrix. Then, $\sigma$ is its smallest eigenvalue, i.e., . While both sequences and are increasing, their orders do not correspond one-to-one; a mapping ϕ is therefore defined to establish the relationship between j and i, i.e., . Using spectral decomposition, we can get , where . Let U be chosen such that is used to permute the eigenvectors between H and . Consequently, we obtain
where . Let , and we have
Using the Hölder inequality, we have
It follows from (21) and (22) that (16) can be derived from (15).

Theorem 2. Let the conditions of Theorem 1 hold. For all followers , , if and , then all followers' states converge to the leader's state.
Proof. Let
; then,
, and we have
According to Kronecker’s Lemma, we have
Then, . Therefore, the leader-following multi-agent system can achieve consensus, i.e., the states of all followers converge to the leader’s state. □
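For the reader's convenience, the form of Kronecker's Lemma commonly invoked in such series-based convergence arguments can be stated as follows (a standard result, reproduced here rather than taken from the paper):

```latex
% Kronecker's Lemma (standard statement)
\textbf{Lemma.}\ Let $(b_n)$ be a sequence with $0 < b_1 \le b_2 \le \cdots$ and
$b_n \to \infty$, and let $(x_n)$ be a real sequence such that
$\sum_{n=1}^{\infty} x_n$ converges. Then
\[
  \lim_{n \to \infty} \frac{1}{b_n} \sum_{k=1}^{n} b_k x_k = 0 .
\]
```

Applied with a suitable weight sequence, the lemma converts the summability established by the triggering conditions into the vanishing of the weighted average, which yields the convergence of the followers' states.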
The above analyses are all based on abstract trigger functions. In the following, we provide a specific triggering function to verify the validity of the method. For the leader-following MAS (4), we define
The upper bound of
can still be given by (12). We define the trigger function as follows:
where
. Hence, it is true that
From (23), we can obtain
.
Corollary 1. Let the conditions of Theorem 1 hold and let the trigger function be (24); then, the state of each follower i converges to the state of the leader, and the upper bound can be written in the form given below for all time steps.
Proof. By (12) and (24), we have
Let
; then, (26) can be written as
When
, the inequality
holds. Now, we only discuss the case of
. When
, it is obvious that
. Assume that inequality (27) holds when
; then, we obtain
By mathematical induction, for all
, inequality (27) holds. □
4. Simulations
In this section, we give some simulation examples to demonstrate the effectiveness of the control strategy. We select two types of time intervals: an event-related time interval and a fixed-length time interval. With the event-related time interval, the current time interval ends at the moment an event occurs, and the new time interval begins at the next checking instant. With the fixed-length time interval, each time interval spans a given number of event-checking periods.
We consider a leader-following multi-agent system containing one leader and six followers, where 0 represents the leader and 1, 2, 3, 4, 5, and 6 represent the followers.
Assume that the topology between followers is connected and undirected, and the communication weight is 0.5. The communication topology is given in
Figure 1. Then, we can get the weighted adjacency matrix
The matrix B, which represents the connection between the leader and followers, is expressed as
We set the control gain parameter
, and other parameters are as follows:
Suppose that the initial state at the beginning of the simulation is randomly determined within the specified range, where
,
.
The X-axis in Figure 2 represents the simulation time, and the Y-axis represents the state values of the leader and followers. As Figure 2 shows, for the MAS with either an event-related or a fixed-length time interval, the states of all followers eventually converge to the state of the leader.
Next, we compare the number of triggers for systems with an event-related time interval, with fixed-length time intervals (here, we choose three fixed lengths: 3, 5, and 10 checking periods), and with a standard PETC method without time intervals, i.e., an interval length of 1. We conduct 100 simulation experiments and statistically analyze the distribution of the number of triggers for each case in each experiment, with the following outcomes.
For the five cases described above, multiple simulation experiments are conducted, and the number of triggers in each experiment is recorded. The first five sub-figures in Figure 3 show the distributions of the number of triggers across the experiments, grouped into intervals; the sixth sub-figure in Figure 3 shows the specific trigger counts from a randomly selected experiment. The average update rate of the MAS without a time interval is the highest, reaching 38.25%. With the event-related time interval, the update rate decreases to 30.83%; the frequency of state-information updates clearly decreases, which reduces the communication cost. When the fixed-length time interval strategy is adopted, the performance depends on the interval length. For a fixed length of 3, the update rate is 26.33%, slightly lower than with the event-related time interval; for a length of 5, it drops further to 21.83%; and when the length increases to 10, the update rate is only 15.08%. These data demonstrate that the fixed-length time interval strategy effectively reduces the update frequency, and that the reduction grows with the length of the time interval.
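The trigger-counting experiment can be sketched in code as follows. Here a plain relative-threshold rule stands in for the window-based rule, and the three-follower topology, parameters, and 200-step horizon are illustrative assumptions, so the resulting rates are not comparable to the percentages reported above:

```python
import numpy as np

Lap = np.array([[1.0, -0.5, -0.5],
                [-0.5, 1.0, -0.5],
                [-0.5, -0.5, 1.0]])   # hypothetical complete follower graph
H = Lap + np.diag([1.0, 0.0, 0.0])   # leader linked to follower 1 only
c, h, sigma, steps = 1.0, 0.1, 0.2, 200

rng = np.random.default_rng(1)

def update_rate():
    """Fraction of event checks that actually trigger a broadcast in one run."""
    delta = rng.uniform(-5.0, 5.0, size=3)   # random initial disagreement
    held = delta.copy()
    fired = 0
    for _ in range(steps):
        err = held - delta
        fire = np.abs(err) > sigma * np.abs(delta)
        held = np.where(fire, delta, held)
        fired += int(fire.sum())
        delta = delta - c * h * (H @ held)
    return fired / (3 * steps)

rates = [update_rate() for _ in range(100)]   # 100 Monte-Carlo experiments
mean_rate = float(np.mean(rates))
print(0.0 < mean_rate < 1.0)   # some checks fire, but far from all of them
```

Repeating such runs while swapping in different triggering rules (and different window lengths for the intervals) is exactly the kind of comparison summarized in Figure 3.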