Adaptive Event-Triggered Consensus of Multi-Agent Systems in Sense of Asymptotic Convergence

In this paper, the asymptotic consensus control of multi-agent systems with general linear agent dynamics is investigated. A neighbor-based adaptive event-triggering strategy with a dynamic triggering threshold is proposed, which leads to a fully distributed control of the multi-agent system, depending only on the states of the neighboring agents at triggering moments. Using the Lyapunov method, we prove that the states of the agents converge asymptotically. In addition, the proposed event-triggering strategy is proven to exclude Zeno behavior. The numerical simulation results illustrate that the agent states achieve consensus in the sense of asymptotic convergence. Furthermore, the proposed strategy is shown to be scalable in the case of varying agent numbers.


Introduction
The recent research into multi-agent systems (MASs) paves the way for energy management and scheduling in smart grids [1], robot formation [2], and sensor networks [3,4]. Due to these wide applications, the consensus of MASs has attracted widespread attention [5,6]. As the size of a MAS increases, the limits of the communication bandwidth and the energy resources of the agents (such as mobile robots powered by batteries) become difficult issues that need to be resolved in controller design. To deal with these problems, distributed communication was proposed, by means of exchanging local information between neighboring agents [7]. Event-triggered strategies provide an effective solution for reducing the communication frequency, and thus the energy consumption of agents, by transmitting the state of an agent only when the triggering condition is activated [7][8][9][10].
The cooperative control of MASs enables the organized agents to accomplish complex tasks. Research in this area has attracted considerable attention for decades [11], during which time it has evolved from centralized to distributed control [12]. Centralized methods normally depend on massive communication, which heavily burdens the communication bandwidth of a large-scale MAS. In addition, a high communication frequency leads to fast energy consumption, which shortens the operating time of battery-powered systems. For these reasons, distributed control has received more attention. In most works on distributed control, the agents are assumed to communicate continuously. Furthermore, agents are assumed to be aware of the topology of the communication network, such as in [13,14]. However, continuous control requires high-frequency communication between agents.
To overcome this drawback, sampled-data control was proposed, which utilizes time-triggered strategies with a predetermined sampling sequence. Recently, a large number of works on sampled-data control have been developed, such as [15][16][17]. In these works, agents need to communicate with their neighbors synchronously and select an appropriate sampling period, which is difficult to implement in practice.
Event-triggered control, which is more efficient than sampled-data control in reducing unnecessary information transmission, was originally proposed in [18]. This method was then implemented in MASs in [19][20][21][22][23]. The critical issue in event-triggered control is to determine the events and the triggering mechanism. Updates to the controller and exchanges of information occur exclusively when the triggering condition is satisfied. In general, the implementation of event-triggered strategies in MASs depends on the eigenvalues of the Laplacian matrix, which is global information associated with the communication graph, as shown in [24][25][26][27][28][29]. To improve on this, fully distributed event-triggered control was proposed very recently [30,31], in which the consensus error is proven to be uniformly ultimately bounded.
Recently, adaptive control has been successfully applied to MASs. Agents with first-order dynamics were considered in [27]. The control strategy was then extended to agents with general linear dynamics [28]. Within this framework, further issues such as actuator and sensor faults were considered [32]. Considering bounded uncertainties, a static non-smooth event-triggered protocol was investigated in [29]. In [33], external disturbances were considered. Notably, in the aforementioned papers, real-time feedback of the neighbors' states is required.
To further reduce the frequency of communication between agents, dynamic event-triggered adaptive control was proposed recently [34,35], in which the event-triggering threshold is a dynamic variable. In [34], dependence on the eigenvalues of the Laplacian matrix is mandatory. This requirement is relaxed in [35], where the neighbors broadcast their information at an agent's triggering instants; this, however, increases the communication burden at triggering moments.
In this paper, we make significant modifications to the control strategy for MASs with linear dynamics proposed in [35]. Differently from the research objective in [23], where the consensus control of discrete-time multi-agent systems with parameter uncertainties was investigated, we focus on adaptive event-triggered control for general linear agents. The main contributions are twofold. First, the proposed control strategy is independent of global information, and the consensus error converges asymptotically, which differs from [31]. Second, no agent is required to continuously read or listen to its neighbors' states. Each agent broadcasts its information to its neighbors exclusively when the event-triggering condition is satisfied. We also prove that Zeno behavior is excluded. The simulation results show that the consensus error converges asymptotically and the communication frequency is significantly decreased.
The rest of this paper is organized as follows. In Section 2, some preliminaries on graph theory are given. In Section 3, the adaptive event-triggered strategy is proposed. In Section 4, illustrative numerical simulations are carried out, which demonstrate the effectiveness of the theoretical results. Some conclusions are given in Section 5.

Preliminaries

Problem Statement
We consider a multi-agent system with N agents. Each agent is modeled by general linear dynamics as follows:

ẋ_i(t) = A x_i(t) + B u_i(t), i = 1, . . ., N, (1)
where x_i(t) ∈ R^n and u_i(t) ∈ R^m represent the state and control input of agent i, respectively, and A ∈ R^{n×n} and B ∈ R^{n×m} are constant matrices. The objective of this paper is to design a fully distributed event-triggered consensus protocol for a leaderless network of agents with dynamics modeled by (1). The requirements for this goal are as follows: the communication of the agents is distributed, i.e., each agent can only communicate with its neighboring agents and can only obtain their state information; the state variables of all agents ultimately reach asymptotic consensus (the definition of consensus in a multi-agent system is given below); and, considering the constraints of communication bandwidth and energy in real systems, the agents do not communicate in real time.

Definition 1 ([34]). (Consensus of Multi-Agent Systems.) In a multi-agent system with N agents, for any given initial states, if lim_{t→∞} ∥x_i(t) − x_j(t)∥ = 0, where i, j = 1, 2, . . ., N, then the states of the agents achieve consensus.
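As a concrete illustration of dynamics of this general linear form, the following sketch forward-Euler-integrates a hypothetical double-integrator agent; the matrices A and B are illustrative placeholders, not the paper's simulation model.

```python
import numpy as np

# Hypothetical double-integrator agent: x = [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def euler_step(x, u, h):
    # One forward-Euler step of dx/dt = A x + B u.
    return x + h * (A @ x + B @ u)

h, steps = 1e-3, 1000           # integrate over 1 s
x = np.zeros(2)
u = np.array([1.0])             # constant unit acceleration
for _ in range(steps):
    x = euler_step(x, u, h)
# After 1 s of unit acceleration: velocity = 1, position ≈ 0.5.
```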

Graph Theory
In a multi-agent system, the communication between agents can be modeled using graph theory. Agents are represented by nodes, and the communication between agents is represented by edges. A graph is a pair G = (V, E), where V is a non-empty set of nodes and E ⊆ V × V is a set of edges. An element of E is denoted by (i, j), which indicates that node i can send information to node j. In this case, node i is called a neighbor of node j, and node j is an out-neighbor of node i. The set of neighbors of node i is denoted by N_i = {j : (j, i) ∈ E}, and the number of neighbors is |N_i|. If, for any (i, j) ∈ E, there also exists (j, i) ∈ E, then the graph is called undirected. An undirected graph is connected if there exists a path (consisting of one or more edges) between every pair of distinct nodes; otherwise, it is not connected. For a graph G, the adjacency matrix is denoted by A = [a_ij] ∈ R^{N×N}, with diagonal elements a_ii = 0 and off-diagonal elements a_ij = 1 if (j, i) ∈ E and a_ij = 0 otherwise. The Laplacian matrix is L = D − A, where D = diag(|N_1|, . . ., |N_N|) is the degree matrix. Two assumptions are given as follows.
Assumption 1. The graph G is undirected and connected.
Assumption 2. The pair (A, B) is stabilizable.

Lemma 1 ([31]). The Laplacian matrix has a zero eigenvalue, and the corresponding eigenvector is 1_N, a column vector with all elements equal to 1. Moreover, all non-zero eigenvalues of the Laplacian matrix have positive real parts. If an undirected graph is connected, its Laplacian matrix L has exactly one zero eigenvalue, and the smallest non-zero eigenvalue, denoted λ_2(L), satisfies λ_2(L) = min_{x ≠ 0, 1_N^T x = 0} (x^T L x)/(x^T x).
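These properties are easy to verify numerically. The sketch below builds the Laplacian L = D − A of a small undirected path graph (an illustrative topology, not one from the paper) and checks the zero eigenvalue with eigenvector 1_N as well as the value of λ_2(L).

```python
import numpy as np

# Undirected path graph on 3 nodes: 0 -- 1 -- 2 (illustrative example).
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
deg = np.diag(adj.sum(axis=1))     # degree matrix D = diag(|N_1|, ..., |N_N|)
lap = deg - adj                    # Laplacian L = D - A

eigvals = np.linalg.eigvalsh(lap)  # sorted eigenvalues of the symmetric L
lambda2 = eigvals[1]               # smallest non-zero eigenvalue
ones = np.ones(3)                  # L @ ones should vanish (zero eigenvector)
```

For this path graph the Laplacian spectrum is {0, 1, 3}, so λ_2(L) = 1.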

Main Results
In this section, an adaptive event-triggered consensus protocol is proposed. A schema of the controller is shown in Figure 1. In general, an event-based consensus protocol mainly consists of an event-based control law and a triggering function for each agent [30]. The control input design and the triggering function are given in the sequel.

The Consensus Control Module
Inspired by [31], we propose the adaptive control input for agent i as follows:

u_i(t) = E ∑_{j∈N_i} c_ij(t) (x̂_i(t) − x̂_j(t)), ċ_ij(t) = a_ij (x̂_i(t) − x̂_j(t))^T F (x̂_i(t) − x̂_j(t)), (2)

where c_ij(0) ≥ 0, and the matrices E ∈ R^{m×n} and F = RBB^T R ∈ R^{n×n} are feedback gains of the controller. The continuous state of agent i is estimated by

x̂_i(t) = e^{A(t − t_k^i)} x_i(t_k^i), t ∈ [t_k^i, t_{k+1}^i), (3)

where t_k^i is the latest triggering instant of agent i.

Remark 1. In (2), the adaptive parameter c_ij(t) is used to regulate the weights of the communication links in the topology. The protocol for c_ij(t) is designed based only on the state variables of agents i and j at triggering moments. Therefore, the controller is fully distributed. In addition, continuous measuring and listening of agent states are avoided.
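The broadcast-and-hold estimate can be sketched as follows. The open-loop propagation x̂_i(t) = e^{A(t − t_k^i)} x_i(t_k^i) used here is the common choice in event-triggered consensus (as in [31]); it, along with the drift matrix A, is an illustrative assumption of this sketch.

```python
import numpy as np
from scipy.linalg import expm

def held_state_estimate(A, x_broadcast, t_broadcast, t):
    # Propagate the last broadcast state through the open-loop dynamics:
    # x_hat(t) = e^{A (t - t_k)} x(t_k) for t in [t_k, t_{k+1}).
    return expm(A * (t - t_broadcast)) @ x_broadcast

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # illustrative drift matrix
x_k = np.array([1.0, 2.0])                 # state broadcast at t_k = 0
x_hat = held_state_estimate(A, x_k, 0.0, 0.5)
# For this nilpotent A, e^{A t} = I + A t, so x_hat = [1 + 2*0.5, 2] = [2, 2].
```

No communication is needed between triggering instants: each neighbor runs this propagation locally from the last broadcast.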

The Event-Triggering Protocol
The triggering function is designed as follows:

f_i(t) = ∥e_i(t)∥^2 − σ_i θ_i(t), θ̇_i(t) = −ρ_i θ_i(t) − ∥e_i(t)∥^2, (4)

where e_i(t) ≜ x̂_i(t) − x_i(t), i = 1, . . ., N, is the estimation error of the state of agent i. The parameters ρ_i and σ_i are positive constant scalars, which need not be equal across agents. The selection of these parameters is discussed later.
The triggering time for agent i is defined by t_{k+1}^i = inf{t > t_k^i : f_i(t) ≥ 0}, where t_k^i represents the k-th triggering time of agent i, and f_i(t) ≥ 0 is called the event-triggering condition. Note that t_1^i = 0. As can be seen from the event-based consensus framework in Figure 1, the update of the controller signal of agent i depends on its own states at its triggering moments and on the states of its neighbors at their latest triggering moments.
In the second equation in (4), for positive scalars ρ_i and σ_i, we obtain θ̇_i > −(ρ_i + σ_i)θ_i everywhere except at the triggering moments. According to the comparison principle, it then holds that θ_i(t) > θ_i(0) e^{−(ρ_i + σ_i)t} > 0 for all t ≥ 0, provided that θ_i(0) > 0.
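The positivity of the threshold can be checked numerically. The sketch below uses a Girard-style dynamic threshold, θ̇_i = −ρ_i θ_i − ∥e_i∥² with trigger rule ∥e_i∥² ≥ σ_i θ_i (an assumed concrete form consistent with the bound θ̇_i > −(ρ_i + σ_i)θ_i stated above), driven by a synthetic, linearly growing estimation error.

```python
import numpy as np

rho, sigma = 0.3, 0.8
h, steps = 1e-3, 5000            # 5 s of simulation, Euler step h
theta, e = 0.25, 0.0             # theta_i(0) > 0; estimation error starts at 0
thetas, triggers = [theta], 0

for _ in range(steps):
    e += h * 0.5                 # synthetic drift of the estimation error
    if e * e >= sigma * theta:   # trigger: broadcast, so the error resets
        e = 0.0
        triggers += 1
    theta += h * (-rho * theta - e * e)   # threshold dynamics
    thetas.append(theta)

# Between events e^2 < sigma * theta, so each Euler step contracts theta by
# at most a factor (1 - h*(rho + sigma)); theta therefore stays positive.
lower = 0.25 * (1.0 - h * (rho + sigma)) ** np.arange(steps + 1)
```

The discrete lower bound mirrors the comparison-principle bound θ_i(t) ≥ θ_i(0) e^{−(ρ_i+σ_i)t}.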

Consensus Analysis
In the sequel, we prove that the multi-agent system achieves asymptotic consensus under the proposed protocol while avoiding Zeno behavior. Let us define the consensus error of agent i as ξ_i ≜ x_i − (1/N) ∑_{j=1}^{N} x_j. The compact form over all the agents is the stacked vector ξ = [ξ_1^T, . . ., ξ_N^T]^T.
Theorem 1. Suppose Assumptions 1 and 2 hold and E = −B^T R, where R > 0 is the solution of the algebraic Riccati equation (ARE) RA + A^T R − RBB^T R + I = 0. Then the consensus error ξ converges to zero asymptotically and the adaptive parameters c_ij in (2) are uniformly ultimately bounded, provided that the adaptive protocol (2) satisfies c_ij(0) ≥ 0 and the event-triggering function (4) satisfies θ_i(0) > 0, σ_i < 1, and σ_i + ρ_i > 1.

Proof. Based on the design of the control input (2) and the ARE, we choose the feedback matrix in the control input as E = −B^T R and a Lyapunov function candidate V_1 as follows. Its time derivative yields V̇_1. Since the communication topology of the MAS in this paper is assumed to be undirected, the entries of the adjacency matrix satisfy a_ij = a_ji = 1 for (i, j) ∈ E. The adaptive gains are then given such that (7) holds. By substituting Equation (7) into (6), we can rewrite V̇_1 as follows.
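The gain design in Theorem 1 can be reproduced with an off-the-shelf Riccati solver. The sketch below uses an illustrative stabilizable pair (A, B), not the paper's model; scipy's solve_continuous_are solves A^T X + X A − X B R⁻¹ B^T X + Q = 0, which matches the paper's ARE when Q = I and R = I.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # illustrative stabilizable pair (A, B)
B = np.array([[0.0],
              [1.0]])

# Solve R A + A^T R - R B B^T R + I = 0 for R > 0 (Q = I, weight = I).
R = solve_continuous_are(A, B, np.eye(2), np.eye(1))

E = -B.T @ R                      # feedback gain E = -B^T R
F = R @ B @ B.T @ R               # adaptive-law weight F = R B B^T R

residual = R @ A + A.T @ R - R @ B @ B.T @ R + np.eye(2)
```

Checking that the residual vanishes and that R > 0 (so F is symmetric positive semidefinite) confirms the conditions the theorem relies on.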
Recalling that ξ_i − ξ_j = x_i − x_j and e_i = x̂_i − x_i, we obtain the following equation.
To utilize the algebraic Riccati equation (ARE), we add a second term V_2 to (5), which is given as follows.
In the sequel, we discuss how Zeno behavior is excluded.

Theorem 2. If the parameters of the controller (2) and the triggering function (4) are selected such that c_ij(0) ≥ 0, θ_i(0) > 0, σ_i < 1, and σ_i + ρ_i > 1, then there does not exist any positive finite constant T such that lim_{k→∞} t_k^i = T; that is, Zeno behavior is excluded.

Proof. The derivative of ∥e_i(t)∥ satisfies D^+∥e_i(t)∥ ≤ ∥ė_i(t)∥ ≤ ∥A∥∥e_i(t)∥ + ∥B u_i(t)∥, where t ∈ [t_k^i, t_{k+1}^i). According to Theorem 1, we infer that the states and control inputs are bounded. We proceed with the proof by contradiction: assume that Zeno behavior exists, i.e., lim_{k→∞} t_k^i = T for some finite T.
When ∥A∥ ≠ 0, by using the comparison principle, we can rewrite (28) as follows.
According to the aforementioned analysis, the Zeno behavior of the MAS is excluded. That ends the proof.

Simulation Results
In this section, we carry out numerical simulations to validate the proposed theoretical results and compare them with some related works to show the improvements.
We consider the following two scenarios. In the first scenario, a multi-agent system with five agents is considered. The dynamics of each agent are modeled by a third-order linear system, given in (34).
The initial states of the five agents are given by x_i(0), i ∈ V. The communication topology of the MAS is shown in Figure 2; from its Laplacian matrix, it is evident that the graph is undirected and connected. In controller (2), we choose c_ij(0) = 0.25 if j ∈ N_i. The gain matrices are obtained by solving the ARE RA + A^T R − RBB^T R + I = 0. In (4), the initial value of the event-triggering threshold is assigned as θ_i(0) = 0.25, and the scalars are chosen as ρ_i = 0.3 and σ_i = 0.8 for all i ∈ V.
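Although the paper's third-order model (34) and exact protocol equations are not reproduced here, the closed-loop behavior can be sketched with a toy version: hypothetical single-integrator agents (A = 0, B = 1, for which the ARE gives R = 1, hence E = −1 and F = 1), a ring topology, and the parameter values c_ij(0) = 0.25, θ_i(0) = 0.25, ρ_i = 0.3, σ_i = 0.8 from above. The update laws below are Girard-style assumptions, not the paper's exact equations.

```python
import numpy as np

# Toy adaptive event-triggered consensus with single-integrator agents.
# Topology, initial states, and update laws are illustrative assumptions;
# only the scalar parameters follow the paper's simulation setup.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # hypothetical ring graph
N, dt, T = 5, 1e-3, 5.0
adj = np.zeros((N, N))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

x = np.array([1.0, -1.0, 2.0, 0.5, -2.0])  # hypothetical initial states
x_hat = x.copy()                           # last broadcast states (A = 0: held)
c = 0.25 * adj                             # adaptive gains c_ij(0) = 0.25
theta = 0.25 * np.ones(N)                  # dynamic thresholds theta_i(0)
rho, sigma = 0.3, 0.8
spread0 = x.max() - x.min()
triggers = np.zeros(N, dtype=int)

for _ in range(int(T / dt)):
    e2 = (x_hat - x) ** 2                  # squared estimation errors
    fire = e2 >= sigma * theta             # event-triggering condition
    x_hat[fire] = x[fire]                  # triggered agents broadcast anew
    e2[fire] = 0.0
    triggers += fire
    diff = x_hat[:, None] - x_hat[None, :] # broadcast differences x_hat_i - x_hat_j
    u = -(c * diff).sum(axis=1)            # u_i = E * sum_j c_ij (x_hat_i - x_hat_j), E = -1
    c += dt * adj * diff ** 2              # adaptive law driven by broadcast states only
    theta += dt * (-rho * theta - e2)      # dynamic threshold decay
    x += dt * u                            # single-integrator agent dynamics

spread_final = x.max() - x.min()
```

Even in this crude Euler sketch, the spread of the states shrinks markedly while each agent communicates only at its own triggering instants.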
For the sake of comparison, we also carried out simulations using the control strategies proposed in [31,35]. In both simulations, the agent model and initial states are the same as in (34). In [31], the adaptive parameters are also initialized as c_ij(0) = 0.25 if j ∈ N_i. In [35], the parameters of the triggering threshold are likewise ρ_i = 0.3 and σ_i = 0.8, i ∈ V. In the second scenario, we consider a MAS with eight agents. The dynamics of each agent are modeled as follows.
The initial states of the eight agents are given by x_i(0), i ∈ V. The network topology of the MAS with eight agents is shown in Figure 3. This topology is also undirected and connected, and its Laplacian matrix is as follows.
In controller (2), we choose c_ij(0) = 0.25 if j ∈ N_i. The gain matrices are obtained by solving the ARE RA + A^T R − RBB^T R + I = 0. In (4), the initial value of the event-triggering threshold is assigned as θ_i(0) = 0.25, and the scalars are chosen as ρ_i = 0.3 and σ_i = 0.8 for all i ∈ V.
For the sake of comparison, we also carried out simulations using the control strategies proposed in [31,35]. In both simulations, the agent model and initial states are the same as in (35). In [31], the adaptive parameters are also initialized as c_ij(0) = 0.25 if j ∈ N_i. In [35], the parameters of the triggering threshold are likewise ρ_i = 0.3 and σ_i = 0.8, i ∈ V.
Using the controllers proposed in this paper and in [31,35], a comparison of the first components ξ_i(1), i ∈ V, of the consensus error ξ is given in Figure 4, in which we observe that, under the proposed control, the MAS achieves consensus asymptotically and the convergence speed is faster than that in [31,35].
Using the three methods, we also compare the control inputs of the agents in Figure 5. We can see that the control outputs of the proposed strategy and of [35] both fluctuate less than those of [31]. Moreover, our proposed method triggers fewer times than [35], as shown in Figure 6. Tables 1 and 2, respectively, show the statistics of the triggering times of the five and eight agents under the three control strategies. According to Tables 1 and 2, the triggering frequency is significantly reduced by using our proposed controller, compared with those in [31,35].
To show the scalability of the proposed strategy, we consider two further scenarios, in which one agent joins (or leaves) the group of agents at a certain time instant. We reconsider the MAS represented by the topology in Figure 2.
In the first scenario, the 6th agent joins the group at 3 s, and the topology of the MAS becomes that of Figure 7. Using the proposed strategy, the consensus errors ξ are shown in Figure 8, in which we can observe that the MAS achieves consensus asymptotically even though the 6th agent joins the network at 3 s. In the second scenario, the 2nd agent is disconnected from the other agents at 3 s, and the topology of the MAS becomes that of Figure 9. The consensus error ξ is shown in Figure 10, in which we can observe that the MAS also achieves consensus asymptotically under the proposed strategy, even though the 2nd agent is disconnected from its neighbors at 3 s.

Conclusions and Perspectives
In this paper, we address the consensus problem of multi-agent systems with general linear dynamics. A dynamic event-triggered adaptive control strategy is proposed. Compared with existing works, our proposed strategy leads to consensus of the agents' states in the sense of asymptotic convergence. Furthermore, it improves the convergence speed and reduces the triggering frequency. The proposed strategy is fully distributed and scalable: no global information is required, and the agent states achieve consensus asymptotically even if the communication topology switches. Under this strategy, continuous communication among agents and simultaneous broadcasts of the neighbors' information are avoided.
In practice, time delays widely exist in discontinuous communications. The consensus problem with uncertain and stochastic communication delays is an open topic for further study. On the other hand, the actuator saturation of agents should be considered, which leads to nonlinearities in the agent dynamics. In this case, the trade-off between triggering frequency and response rapidity should be treated in future work.

Figure 1 .
Figure 1.Event-triggered strategy framework for agent i.

Figure 2 .
Figure 2. Undirected network topology of the MASs with five agents.

Figure 3 .
Figure 3. Undirected network topology of the MASs with eight agents.

Figure 4 .
Figure 4. First components of consensus errors. (a) First components of consensus errors of five agents ξ_i(1), i ∈ V, using the control strategies in [31,35] and this paper, respectively. (b) First components of consensus errors of eight agents ξ_i(1), i ∈ V, using the control strategies in [31,35] and this paper, respectively.

Figure 5 .
Figure 5.Control outputs.(a) Control outputs of the five agents, using the strategies in[31,35] and this paper.(b) Control outputs of the eight agents, using the strategies in[31,35] and this paper.

Figure 6 .
Figure 6.Triggering instants of agent i, i = 1, . . ., 5. (a) The triggering instants of agent i under the control strategy proposed in [35]; (b) The triggering instants of agent i under the control strategy proposed in this paper.

Figure 7 .
Figure 7. Undirected network topology of the MAS with five agents, where the 6th agent joins at 3 s.

Figure 9 .
Figure 9. Undirected network topology of the MASs with five agents where 2nd agent is disconnected at 3 s.

Table 1 .
The number of events driven by event triggers in MAS with five agents.

Table 2 .
The number of events driven by event triggers in MAS with eight agents.