Symmetry
  • Article
  • Open Access

4 December 2025

Leader–Follower Consensus of Switched Multi-Agent Systems Under Distributed Event-Triggered Scheme

School of Statistics and Data Science, Jilin University of Finance and Economics, Changchun 130117, China
* Authors to whom correspondence should be addressed.

Abstract

This paper proposes a distributed event-triggered switching strategy that solves the consensus problem of switched multi-agent systems. By introducing an event-triggering mechanism, the signals exchanged by each agent are updated only at discrete trigger instants, thereby reducing the communication and computation loads. The trigger conditions are designed to account for the error between each follower's state and those of its neighbors, ensuring that consensus is reached. By constructing multiple Lyapunov functions together with looped functionals related to the event-triggering instants, the conservativeness of the multiple-Lyapunov-function method is relaxed. It is shown that the closed-loop system achieves exponential leader–follower consensus even when all subsystems are unstable, while strictly excluding Zeno behavior. Numerical simulations verify the effectiveness of the proposed method.

1. Introduction

Cooperative control of multi-agent systems (MASs) has flourished in recent decades, with applications spanning various domains such as formation control [1], intelligent transportation [2], networked estimation [3], cooperative guidance of multiple missiles [4], and distributed optimization [5]. Among cooperative control problems, the consensus problem is one of the most extensively studied and widely applied coordination issues. The consensus control refers to designing appropriate protocols for each agent so that, using only information exchanged with neighbors, all agent states eventually reach a common value. For MASs with single-integrator dynamics, seminal works provided sufficient conditions for achieving consensus under both directed and undirected graphs [6]. Consensus of second-order integrator MASs was studied in [7], which proposed a protocol design and analysis strategy enabling agreement in both position and velocity. Consensus problems with switching network topologies have also been addressed. For instance, ref. [8] considered switching communication graphs that are not strongly connected at every instant; by utilizing graph-theoretic tools and relative state information, joint connectivity conditions ensuring consensus were derived.
In consensus research on multi-agent systems, traditional control protocols usually presuppose continuous communication between agents. In practical applications, however, limited network bandwidth and constrained device resources mean that such continuous communication not only incurs high control costs but may also render the system unsustainable due to excessive energy consumption. To break through this bottleneck, researchers have turned to the optimized design of communication mechanisms, aiming to enhance resource utilization while maintaining system stability. The classic periodic sampling control method [9,10] converts continuous signal transmission into periodic data exchange by introducing a discrete-time communication mechanism. Because it ignores the dynamic behavior of the agents themselves, however, periodic sampling control is often conservative [11,12], and determining the optimal sampling rate that ensures both stability and performance remains challenging. An excessively long sampling interval may degrade performance, while an excessively high sampling rate can lead to unnecessary transmissions and frequent controller updates, thereby lowering efficiency. To save resources, reduce controller updates, and avoid unnecessary information transmission during the control process, a method that makes agents act “on demand” has been proposed, namely event-triggered control [13]. Under this scheme, the control signal is updated only when a specific trigger condition is met, which significantly reduces the communication burden by avoiding continuous or periodic communication.
In cooperative control of multi-agent systems, the consensus process relies on information exchange among agents. Traditional distributed control is usually built on the assumption that the communication network among all agents is ideal. However, because network resources are limited in practice, ideal network communication is clearly infeasible [14,15], especially in complex applications such as multi-agent systems. In wireless communication, for instance, signals may suffer from electromagnetic interference, degrading communication quality and causing data loss or delay. These real-world limitations severely restrict the performance of multi-agent systems and make traditional control methods based on the assumption of ideal communication difficult to apply directly. In recent years, event-triggered control has been introduced into multi-agent systems research, providing new ideas for cooperative control in resource-constrained environments and achieving remarkable results. For instance, reference [16] proposed a novel distributed event-triggered mechanism that requires only discrete state information from neighboring nodes to achieve consensus of multi-agent systems under fixed and switching topologies. Reference [17] extended the event-triggered control strategy to nonlinear multi-agent systems, derived sufficient conditions for achieving consensus, and provided a theoretical tool for cooperative control under random disturbances. For the case of limited communication resources, reference [18] proposed a hybrid mechanism combining continuous event-triggered and time-triggered schemes, achieving leader-following consensus in stochastic multi-agent systems.
The static event-triggered mechanism significantly enhances the resource utilization of multi-agent systems by reducing unnecessary communication and control updates, demonstrating its wide applicability and value in complex multi-agent systems. For multi-agent systems with external disturbances, reference [19] proposed an event-triggered scheme with dynamic thresholds, which not only solves the leader-follower bounded consensus problem but also significantly improves the disturbance rejection capability of the system.
Recently, scaled consensus problems for hybrid multi-agent systems have also attracted considerable attention. In [20], Donganont studied scaled consensus of hybrid MASs via impulsive protocols, where continuous-time and discrete-time agents are coordinated under a common impulsive framework. In a related work, Donganont [21] further developed finite-time leader–following scaled consensus strategies, providing conditions under which the followers track a scaled version of the leader in finite time. Closely related to edge-based coordination, Park et al. [22] investigated scaled edge consensus in hybrid MASs, where the inter-agent couplings are designed such that the edge states achieve scaled agreement under pulse-type communication. These results highlight the usefulness of hybrid and scaled consensus concepts under impulsive or pulse-modulated protocols. In contrast, the present paper focuses on unscaled leader–follower consensus for purely continuous-time switched MASs, and combines an event-triggered communication mechanism with a looped-functional-based switching law design.
It should be pointed out that in all the above results, the dynamics of each agent remain fixed. In fact, many natural and engineered systems exhibit dynamic behaviors with several distinct modes. Consider a street network: it is a system composed of the roads and traffic lights in a given area, with the traffic lights switching among different colors. Introducing a switching mechanism into multi-agent systems is therefore of significant importance, and doing so requires combining the theory of switched systems with the characteristics of multi-agent systems.
Reference [23] addresses the cooperative output regulation (COR) problem in switched multi-agent systems with input saturation by deriving sufficient conditions from a distributed control scheme. Meanwhile, the consensus problem for uncertain switched MASs with delays under a fixed directed topology is examined in [24]. References [25,26] designed new switching laws for switched systems based on event triggering and state dependence. However, the switching-signal design in reference [25] leads to a mismatch between the dynamic responses of the subsystem and the controller, resulting in asynchronous switching, whereas the switching law designed in reference [26] does not exhibit such asynchronous behavior. Reference [27] designed an agent-dependent switching law that allows all subsystems of each agent to be non-stabilizable, establishing necessary and sufficient conditions for the cooperative output regulation problem of switched multi-agent systems. However, the switching law involved cannot guarantee a lower bound on the switching intervals and may therefore trigger Zeno behavior. Consequently, references [28,29] designed a fully distributed integral event-triggering mechanism and an agent-dependent switching law with dwell-time constraints, ensuring a positive lower bound on the inter-switching intervals of each agent and thus avoiding Zeno behavior.
Building upon the above insights, the primary goal of this paper is to develop a distributed event-triggered control and agent-dependent switching framework that guarantees leader–follower consensus for switched multi-agent systems while rigorously excluding Zeno behavior, even when all subsystem dynamics are unstable. The main conclusions show that, under the proposed scheme, each follower can exponentially track the leader despite switching among unstable modes, and that positive lower bounds on inter-event times can be explicitly established for all agents and modes, thereby ensuring Zeno-free operation and a substantial reduction of communication and computation load. The main contributions of this paper are summarized as follows:
(i)
We design a distributed event-triggered consensus scheme for leader–follower-switched MASs in which all subsystems may be unstable. In contrast to [16,28,29], which mainly focus on cooperative output regulation and often rely on stabilizable subsystems, our framework explicitly handles unstable agent dynamics while still guaranteeing consensus.
(ii)
We develop a state-dependent switching law that is only updated at event-triggered instants. Different from the average-dwell-time-based switching laws in [23,28] and the joint triggering strategies in [25,26], the proposed law requires only intermittent detection at trigger times, thereby reducing continuous monitoring and avoiding the asynchronous switching issues reported in [25].
(iii)
We construct a two-sided looped Lyapunov–Krasovskii functional tailored to the event-triggered setting. This function removes the temporary increase of Lyapunov functions during the mismatched intervals between the minimal switching and triggering instants, thus relaxing the conservatism of standard multiple-Lyapunov-function approaches and providing tractable LMI conditions for leader–follower consensus under the proposed event-triggered switching scheme.
The remainder of this paper is organized as follows. Section 2 reviews basic graph-theoretic notions and formulates the leader–follower consensus problem for switched multi-agent systems. Section 3 presents the proposed prediction-based event-triggered control law and agent-dependent switching strategy and establishes Zeno-free properties and consensus conditions via a two-sided looped Lyapunov–Krasovskii functional and associated LMIs. Section 4 provides numerical examples that illustrate the performance of the proposed scheme and verify the theoretical results. Section 5 concludes the paper and discusses possible directions for future research.

2. Preliminaries and Problem Formulation

2.1. Graph Theory Preliminaries

The communication topology among the agents is modeled by a weighted digraph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, 2, \ldots, N\}$ represents the set of follower agents, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of directed edges. An edge $(j, i) \in \mathcal{E}$ indicates that agent $i$ can receive information from agent $j$. The neighbor set of node $i$ is $\mathcal{N}_i = \{ j \in \mathcal{V} \mid (j, i) \in \mathcal{E} \}$.
Let $A = [a_{ij}] \in \mathbb{R}^{N \times N}$ denote the adjacency matrix of $\mathcal{G}$, defined by $a_{ij} > 0$ if $(j, i) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. The in-degree of node $i$ is $d_i = \sum_{j=1}^{N} a_{ij}$, and $D = \operatorname{diag}\{d_1, \ldots, d_N\}$ is the in-degree matrix. The Laplacian of the graph is $L = D - A$. To incorporate a leader (denoted node 0), we consider an augmented graph $\tilde{\mathcal{G}}$ obtained by adding node 0 to $\mathcal{G}$. The leader-to-follower influence is described by $a_{i0}$: we set $a_{i0} > 0$ if follower $i$ can receive information from the leader, and $a_{i0} = 0$ otherwise. Let $W = \operatorname{diag}\{a_{10}, a_{20}, \ldots, a_{N0}\}$. We define the augmented Laplacian as
$$H := L + W.$$
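As a concrete illustration, the construction of the augmented Laplacian $H = L + W$ can be sketched in a few lines of NumPy. The topology and weights below are hypothetical (a three-follower path graph with only the first follower pinned to the leader), not taken from the paper's examples.

```python
# Sketch: building H = L + W for a hypothetical topology with
# N = 3 followers (path graph 1 -- 2 -- 3) and one pinned follower.
import numpy as np

# Undirected follower adjacency matrix (illustrative unit weights)
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

D = np.diag(A.sum(axis=1))       # in-degree matrix
L = D - A                        # graph Laplacian

# Only follower 1 receives the leader's state (a_10 > 0).
W = np.diag([1.0, 0.0, 0.0])

H = L + W                        # augmented (pinned) Laplacian

# Assumption 1: with a connected follower graph and at least one
# pinned follower, H should be positive definite.
eigs = np.linalg.eigvalsh(H)
print(np.all(eigs > 0))          # → True
```

The positive-definiteness check mirrors the equivalent condition stated in Assumption 1.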
In this paper, we assume the following about the communication topology:
Assumption 1
([29]). The follower graph $\mathcal{G}$ among the $N$ followers is undirected and connected. Moreover, the augmented leader–follower graph obtained by adding the leader node 0 admits a directed spanning tree rooted at the leader. Equivalently, the pinned Laplacian $H = L + W$ is positive definite.
It is worth noting that the connectivity level of the communication graph has a direct impact on both the efficiency and the performance of the overall system. Intuitively, stronger connectivity leads to faster information diffusion and hence faster consensus, but it requires more communication links and a higher communication burden. Conversely, reducing connectivity decreases the communication load, but typically slows down the convergence and may make the stability conditions more conservative. In our framework, the connectivity assumptions ensure that leader–follower consensus is still guaranteed for the considered graphs, while the event-triggered mechanism is designed to reduce the effective communication load without changing the underlying topology.

2.2. Problem Formulation

We consider a leader–follower MAS consisting of one leader and $N$ followers ($i = 1, 2, \ldots, N$). The agents' dynamics switch among multiple modes governed by a common switching signal $\sigma(t)$. The leader, which provides a reference trajectory, follows
$$\dot{x}_0(t) = A_{\sigma(t)} x_0(t),$$
and each follower $i$ has the following dynamics:
$$\dot{x}_i(t) = A_{\sigma(t)} x_i(t) + B_{\sigma(t)} u_i(t),$$
where $x_0(t), x_i(t) \in \mathbb{R}^n$ are the leader and follower state vectors, respectively, and $u_i(t) \in \mathbb{R}^m$ is the control input for follower $i$. For each mode $j$ in the set $\mathcal{I}_m = \{1, 2, \ldots, m\}$, $A_j \in \mathbb{R}^{n \times n}$ and $B_j \in \mathbb{R}^{n \times m}$ are the state and input matrices of that subsystem. The switching signal $\sigma(t) : [0, \infty) \to \mathcal{I}_m$ is a piecewise constant function characterized by a sequence of switching instants $\{t_q\}_{q \in \mathbb{Z}_+}$: we write $\sigma(t) = j_q$ for $t \in [t_q, t_{q+1})$, meaning the $j_q$-th subsystem is active on that interval. We assume that the switching instants have no finite accumulation point (i.e., a positive minimum dwell time exists) and that no state jumps occur at switching times.
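To make the switched dynamics concrete, the leader equation (1) can be simulated with a simple forward-Euler sketch. The two mode matrices and the periodic switching signal below are illustrative assumptions, not the paper's example system.

```python
# Sketch: simulating the switched leader dynamics x0' = A_{sigma(t)} x0
# under a hypothetical periodic switching signal.
import numpy as np

A_modes = [np.array([[0.0, 1.0], [-1.0, 0.1]]),   # mode 1 (illustrative)
           np.array([[0.1, -1.0], [1.0, 0.0]])]   # mode 2 (illustrative)

def sigma(t, dwell=0.5):
    """Periodic switching signal alternating between the two modes."""
    return int(t // dwell) % 2

def simulate_leader(x0, T=2.0, dt=1e-3):
    x, traj = np.array(x0, dtype=float), []
    for k in range(int(round(T / dt))):
        t = k * dt
        x = x + dt * A_modes[sigma(t)] @ x   # forward-Euler step
        traj.append(x.copy())
    return np.array(traj)

traj = simulate_leader([1.0, 0.0])
print(traj.shape)   # → (2000, 2)
```

The same loop, with the input term $B_{\sigma(t)} u_i(t)$ added, simulates a follower (2).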
The control objective is to design a distributed control law u i ( t ) and a state-dependent switching law σ ( t ) , updated only at event-triggered instants, such that the switched MAS achieves leader–follower consensus.
Definition 1
([30]). The switched MAS is said to achieve leader–follower consensus if, for any initial states $x_i(0)$ and for all agents $i \in \mathcal{V}$, the designed control inputs $u_i(t)$ guarantee that
$$\lim_{t \to \infty} \| x_i(t) - x_0(t) \| = 0, \quad i = 1, 2, \ldots, N.$$
In other words, each follower’s state asymptotically converges to the leader’s state.
Remark 1.
The above definition follows the standard asymptotic notion of leader–follower consensus, that is, the tracking errors converge to zero as time goes to infinity. In the main results below, we will in fact derive stronger sufficient conditions ensuring that this convergence is exponential under the proposed event-triggered switching scheme.

3. Main Results

3.1. Event-Triggered Control and Switching Law Design

Our distributed control framework consists of three key components: (i) state prediction and estimation error, (ii) the control protocol, and (iii) an event-triggered mechanism that determines when to transmit and switch. These are detailed as follows.
Each follower $i$ employs a state predictor between events. Let $\{t_k^i\}_{k=0}^{\infty}$ be the sequence of event times for agent $i$, with $t_0^i = 0$. On each interval $[t_k^i, t_{k+1}^i)$, follower $i$ predicts its state using the last sampled state at $t_k^i$. Specifically,
$$\hat{x}_i^k(t) = e^{A_{\sigma(t)} (t - t_k^i)} x_i(t_k^i), \quad t \in [t_k^i, t_{k+1}^i),$$
is the predicted state (an open-loop estimate assuming no further input updates). The prediction error is defined as
$$e_i(t) = \hat{x}_i^k(t) - x_i(t),$$
which is reset to zero at each event $t_k^i$ and grows when no new information is received about agent $i$'s true state.
We propose the following control input for follower i:
$$u_i(t) = K_{\sigma(t)} \Big[ \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^k(t) - \hat{x}_i^k(t) \big) + a_{i0} \big( x_0(t) - \hat{x}_i^k(t) \big) \Big],$$
where $K_{\sigma(t)} \in \mathbb{R}^{m \times n}$ is the mode-dependent state-feedback gain matrix associated with the active subsystem $\sigma(t)$, i.e., $K_{\sigma(t)} = K_j$ whenever $\sigma(t) = j$. For each mode $j \in \mathcal{I}_m$, the gain $K_j$ is designed off-line by solving the LMI-based synthesis conditions in Theorem 3, where matrices $P_j > 0$ and $L_j \in \mathbb{R}^{m \times n}$ satisfying (33) are first computed and then $K_j := L_j P_j^{-1}$ is recovered. In this way, the closed-loop disagreement dynamics $\dot{\delta} = (I_N \otimes A_j - H \otimes B_j K_j)\, \delta$ are exponentially stable, and the gains $K_j$ determine the decay rate and damping of the followers' responses.
The vector inside the brackets in (5) collects the relative state information available to follower $i$, namely the differences between its predicted state $\hat{x}_i^k(t)$ and the predicted states $\hat{x}_j^k(t)$ of its neighbors $j \in \mathcal{N}_i$, together with the leader–follower mismatch $x_0(t) - \hat{x}_i^k(t)$ (we assume that the leader broadcasts its state to its neighbors at event times, so that $\hat{x}_0(t) \equiv x_0(t)$). Physically, $K_{\sigma(t)}$ maps this aggregated relative information into the actual control input $u_i(t) \in \mathbb{R}^m$, thereby shaping how strongly each follower reacts to the discrepancies with its neighbors and the leader. With this structure, the control law remains fully distributed, because agent $i$ only requires information from its immediate neighbors.
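The evaluation of the control law (5) can be sketched as follows; the gain, adjacency weights, and predicted states are hypothetical values chosen only to show the computation.

```python
# Sketch: one evaluation of the distributed control input u_i(t) in (5).
# All numerical values are illustrative placeholders.
import numpy as np

def control_input(K, a_row, a_i0, xhat, xhat_i, x0):
    """u_i = K [ sum_j a_ij (xhat_j - xhat_i) + a_i0 (x0 - xhat_i) ].

    K      : (m, n) feedback gain of the active mode
    a_row  : adjacency weights a_ij of follower i
    a_i0   : pinning gain to the leader
    xhat   : (N, n) predicted states of all followers
    xhat_i : (n,) predicted state of follower i
    x0     : (n,) leader state (broadcast at event times)
    """
    rel = sum(a * (xj - xhat_i) for a, xj in zip(a_row, xhat))
    rel = rel + a_i0 * (x0 - xhat_i)
    return K @ rel

K = np.array([[1.0, 0.5]])                       # m = 1, n = 2
xhat = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
u1 = control_input(K, a_row=[0.0, 1.0, 0.0], a_i0=1.0,
                   xhat=xhat, xhat_i=xhat[0], x0=np.zeros(2))
print(u1)   # → [-1.25]
```

Only the neighbor row `a_row` and the pinning gain `a_i0` of agent $i$ enter the computation, reflecting the fully distributed structure noted above.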
Each follower $i$ decides its next transmission (and switching) instant by monitoring an event-triggering function $f_i(t)$ that measures the difference between its state and its neighbors' states. Specifically, starting from an event time $t_k^i$, the $(k+1)$-th event time is defined as
$$t_{k+1}^i = \inf \big\{ t > t_k^i \;\big|\; f_i(t) \geq 0 \big\},$$
i.e., the first time $t$ after $t_k^i$ at which $f_i(t)$ becomes non-negative.
We design the trigger function as follows:
$$f_i(t) = \| e_i(t) \| - \gamma_1 \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^k(t) - \hat{x}_i^k(t) \big) + a_{i0} \big( x_0(t) - \hat{x}_i^k(t) \big) \Big\|,$$
where $\gamma_1 > 0$ is a design parameter (trigger threshold). In words, agent $i$ triggers an event when the norm of its prediction error $e_i(t)$ grows large relative to a weighted norm of its neighbors' relative state discrepancies. Prior to the threshold being reached, we have $f_i(t) < 0$, meaning $\| e_i(t) \| < \gamma_1$ times the neighbor-difference norm; this condition will be used to ensure stability and exclude Zeno behavior.
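A minimal sketch of checking the trigger function (7) at one time instant is given below; the states, weights, and threshold $\gamma_1$ are illustrative assumptions.

```python
# Sketch: evaluating f_i(t) in (7). An event fires when f_i(t) >= 0,
# i.e., when ||e_i|| reaches gamma_1 times the weighted
# neighbor-discrepancy norm. Values are illustrative.
import numpy as np

def trigger_value(e_i, a_row, a_i0, xhat, xhat_i, x0, gamma1):
    rel = sum(a * (xj - xhat_i) for a, xj in zip(a_row, xhat))
    rel = rel + a_i0 * (x0 - xhat_i)
    return np.linalg.norm(e_i) - gamma1 * np.linalg.norm(rel)

xhat = np.array([[1.0, 0.0], [0.5, 0.5]])
f = trigger_value(e_i=np.array([0.01, 0.0]),
                  a_row=[0.0, 1.0], a_i0=0.0,
                  xhat=xhat, xhat_i=xhat[0],
                  x0=np.zeros(2), gamma1=0.2)
print(f < 0)   # → True: ||e_i|| = 0.01 is still below the threshold
```

When `f` crosses zero, agent $i$ samples its state, resets $e_i$ to zero, and broadcasts to its neighbors.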
Building on this event-triggered framework, we now design a state-dependent switching law based on a common switching signal $\sigma(t)$. Let $\{t_q\}_{q \in \mathbb{Z}_+}$ denote the ordered sequence of global event instants, i.e., the union of all local event times $\{t_k^i\}_{i=1}^{N}$. At each $t_q$ the active mode is updated as
$$\sigma(t_q^+) = \arg\min_{j \in \mathcal{I}_m} \delta^\top(t_q) \big( I_N \otimes P_j \big) \delta(t_q),$$
and remains constant for all $t \in (t_q, t_{q+1})$. In other words, all agents share the same mode $\sigma(t)$, while the switching instants are determined in a distributed manner by the local event-triggering conditions.
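The mode-selection rule (8) amounts to a finite minimization over the candidate Lyapunov values. A sketch with two hypothetical positive definite matrices $P_j$:

```python
# Sketch: state-dependent mode selection (8) at a global event time,
# choosing the mode with the smallest Lyapunov-like value.
# The P matrices are illustrative positive definite placeholders.
import numpy as np

P_modes = [np.diag([1.0, 2.0]), np.diag([3.0, 0.5])]
N = 2                                     # number of followers

def select_mode(delta, P_modes, N):
    """Return argmin_j delta^T (I_N kron P_j) delta."""
    vals = [delta @ np.kron(np.eye(N), P) @ delta for P in P_modes]
    return int(np.argmin(vals))

delta = np.array([1.0, 0.0, 0.0, 1.0])    # stacked tracking errors
print(select_mode(delta, P_modes, N))     # → 0
```

Because the rule is evaluated only at the discrete instants $t_q$, no continuous monitoring of $\delta(t)$ is required, which is exactly the point made in Remark 3.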
Remark 2.
In Figure 1, the block “Relative state measurement & prediction of agent i” corresponds to the predictor $\hat{x}_i^k(t)$ and the prediction error $e_i(t)$ defined in (4), whereas the block “Distributed controller” implements the control input $u_i(t)$ given by (5). The block “Event-triggered condition $f_i(t)$” realizes the triggering rule (6) and generates the local event instants $\{t_k^i\}_{k \geq 0}$. At each global event time $t_q$, the block “State-dependent switching law $\sigma(t)$” updates the mode according to (8), and the updated mode is shared by both the follower dynamics and the distributed controller. The dashed arrows represent the communication network, which is only used at event times to broadcast the sampled states to neighboring agents.
Figure 1. Configuration of the proposed prediction-based event-triggered switching scheme for a generic follower i.
Remark 3.
Conventional state-dependent switching laws often require continuous monitoring of agents’ states and frequent information broadcasts, imposing significant communication and computation burdens. In contrast, the above event-triggered switching law operates on an on-demand basis: it updates the mode only at discrete trigger times. This on-demand strategy eliminates the need for continuous state monitoring or broadcasting, reducing network usage and preventing unnecessary switching.
Remark 4.
The event-triggered switching mechanism guarantees that controller updates and subsystem switches occur simultaneously when the trigger condition is met. This inherent synchrony means the controller and plant remain matched to each other’s mode, which avoids performance degradation from asynchronous switches. In effect, by limiting switches to trigger instants and imposing a dwell-time, the strategy maintains system stability while substantially reducing the switching frequency.
Remark 5.
Our event-triggered mechanism is fully distributed in the sense that each agent uses only locally available information at trigger instants. Between events, no continuous communication is needed: agent $i$ only requires its neighbors' state information at the moments defined by (6). This significantly alleviates network traffic among agents and reduces each controller's computational load.
With the control law (5), trigger rule (6)–(7), and switching law (8) in place, we next derive the closed-loop error dynamics and present the main theoretical results.
Let $\delta_i(t) := x_i(t) - x_0(t)$ denote the tracking error of follower $i$ relative to the leader. Stacking all follower–leader errors, define $\delta(t) := [\delta_1^\top(t), \delta_2^\top(t), \ldots, \delta_N^\top(t)]^\top \in \mathbb{R}^{Nn}$. From (1)–(2) and the control law (5), we can derive the closed-loop dynamics for $\delta_i(t)$. First, note that
$$\dot{\delta}_i(t) = \dot{x}_i(t) - \dot{x}_0(t) = A_{\sigma(t)} x_i(t) + B_{\sigma(t)} u_i(t) - A_{\sigma(t)} x_0(t).$$
Substituting $u_i(t)$ from (5) and rearranging, we obtain:
$$\begin{aligned} \dot{\delta}_i(t) &= A_{\sigma(t)} \delta_i(t) + B_{\sigma(t)} K_{\sigma(t)} \Big[ \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^k(t) - \hat{x}_i^k(t) \big) + a_{i0} \big( x_0(t) - \hat{x}_i^k(t) \big) \Big] \\ &= A_{\sigma(t)} \delta_i(t) + B_{\sigma(t)} K_{\sigma(t)} \Big[ \sum_{j=1}^{N} a_{ij} \big( \delta_j(t) - \delta_i(t) \big) - a_{i0} \delta_i(t) \Big] \\ &\quad + B_{\sigma(t)} K_{\sigma(t)} \Big[ \sum_{j=1}^{N} a_{ij} \big( e_j(t) - e_i(t) \big) - a_{i0} e_i(t) \Big]. \end{aligned}$$
The step from (9) to (10) uses the definitions $\delta_j = x_j - x_0$ and $e_j = \hat{x}_j^k - x_j$. One can verify that $\hat{x}_j^k - \hat{x}_i^k = (\delta_j - \delta_i) + (e_j - e_i)$ and $x_0 - \hat{x}_i^k = -\delta_i - e_i$, which yields the decomposition into $\delta$-terms and $e$-terms above.
By collecting terms for all $N$ followers, the overall closed-loop error dynamics can be written compactly in vector form. Using the augmented Laplacian $H = L + W$ defined earlier, we have the following:
$$\dot{\delta}(t) = \big( I_N \otimes A_{\sigma(t)} \big) \delta(t) - \big( H \otimes B_{\sigma(t)} K_{\sigma(t)} \big) \delta(t) - \big( H \otimes B_{\sigma(t)} K_{\sigma(t)} \big) e(t),$$
where $e(t) := [e_1^\top(t), e_2^\top(t), \ldots, e_N^\top(t)]^\top$ stacks the prediction errors of all followers. The $i$-th row of blocks of $H \otimes B_{\sigma(t)} K_{\sigma(t)}$ encodes the coupling of agent $i$ with its neighbors.
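As a quick numerical sanity check, the stacked Kronecker form (11) can be verified against the agent-wise expression (10); the system matrices and graph weights below are illustrative.

```python
# Sketch: consistency check that the stacked error dynamics (11) agree
# with the agent-wise dynamics (10). All matrices are illustrative.
import numpy as np

n, N = 2, 3
A = np.array([[0.0, 1.0], [2.0, 0.0]])          # an unstable mode
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 2.0]])
Adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.diag(Adj.sum(1)) - Adj + np.diag([1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
delta = rng.standard_normal(N * n)
e = rng.standard_normal(N * n)

# Stacked closed-loop dynamics (11)
ddot = (np.kron(np.eye(N), A) - np.kron(H, B @ K)) @ delta \
       - np.kron(H, B @ K) @ e

# Agent-wise dynamics (10): row i of -H(delta + e), fed through B K
ddot_agents = np.zeros(N * n)
for i in range(N):
    coup = np.zeros(n)
    for j in range(N):
        coup -= H[i, j] * (delta[j*n:(j+1)*n] + e[j*n:(j+1)*n])
    ddot_agents[i*n:(i+1)*n] = A @ delta[i*n:(i+1)*n] + B @ K @ coup

print(np.allclose(ddot, ddot_agents))   # → True
```

The check exploits the fact that row $i$ of $-H(\delta + e)$ equals $\sum_j a_{ij}\big[(\delta_j - \delta_i) + (e_j - e_i)\big] - a_{i0}(\delta_i + e_i)$.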
This section presents the design of a distributed ETS, an agent-dependent switching signal, and a corresponding control protocol. The objective is to achieve leader-following consensus for switched MASs while explicitly excluding Zeno behavior.

3.2. Exclusion of Zeno Behavior

Theorem 1.
Consider the closed-loop system (11) under the distributed event-triggered mechanism (6)–(7). For each follower agent $i$, there exists a uniform positive lower bound $\tau_{\min} > 0$ on its inter-event times $t_{k+1}^i - t_k^i$. In particular,
$$\tau_i := t_{k+1}^i - t_k^i \geq \frac{1}{\| A_l \|} \ln \Big( 1 + \frac{\gamma_1 \| A_l \|}{\| B_l K_l \|} \Big) > 0$$
for any mode $l$ active during $[t_k^i, t_{k+1}^i)$. Thus, no agent exhibits Zeno behavior (infinitely many triggers in finite time) under the proposed triggering law.
Proof. 
Without loss of generality, consider an interval $[t_{m_\eta}, t_{m_\eta + 1})$ during which the network is in mode $l = \sigma(t)$. Suppose there are $q$ trigger instants for agent $i$ in this interval, labeled $t_k^i, t_{k+1}^i, \ldots, t_{k+q}^i$ with $t_k^i = t_{m_\eta}$ and $t_{k+q}^i < t_{m_\eta + 1}$. For any $t \in [t_{k+r}^i, t_{k+r+1}^i)$ (where $r = 0, 1, \ldots, q-1$), the prediction error is $e_i(t) = \hat{x}_i^{k+r}(t) - x_i(t)$.
We examine the growth of $\| e_i(t) \|$ on $[t_{k+r}^i, t_{k+r+1}^i)$. Using the system dynamics (10), one can bound the right-hand Dini derivative of $\| e_i(t) \|$ as:
$$D^+ \| e_i(t) \| \leq \| A_l \| \, \| e_i(t) \| + \| B_l K_l \| \, \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(t) - \hat{x}_i^{k+r}(t) \big) + a_{i0} \big( x_0(t) - \hat{x}_i^{k+r}(t) \big) \Big\|.$$
At the trigger instant $t_{k+r}^i$, we have $e_i(t_{k+r}^i) = 0$ (the error is reset to zero). Integrating the differential inequality (12) from $t_{k+r}^i$ to some $t \in (t_{k+r}^i, t_{k+r+1}^i)$ yields:
$$\| e_i(t) \| \leq \| B_l K_l \| \, e^{\| A_l \| (t - t_{k+r}^i)} \int_{t_{k+r}^i}^{t} \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(s) - \hat{x}_i^{k+r}(s) \big) + a_{i0} \big( x_0(s) - \hat{x}_i^{k+r}(s) \big) \Big\| \, ds.$$
Now, by the event-triggering condition (7), while $t$ remains prior to the next trigger $t_{k+r+1}^i$, we have $f_i(t) < 0$. This implies the following:
$$\| e_i(t) \| < \gamma_1 \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(t) - \hat{x}_i^{k+r}(t) \big) + a_{i0} \big( x_0(t) - \hat{x}_i^{k+r}(t) \big) \Big\|, \quad t \in (t_{k+r}^i, t_{k+r+1}^i).$$
Combining the above inequality with (13), for any $t \in [t_{k+r}^i, t_{k+r+1}^i)$ we obtain the following:
$$\gamma_1 \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(t) - \hat{x}_i^{k+r}(t) \big) + a_{i0} \big( x_0(t) - \hat{x}_i^{k+r}(t) \big) \Big\| \leq \| B_l K_l \| \, e^{\| A_l \| (t - t_{k+r}^i)} \int_{t_{k+r}^i}^{t} \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(s) - \hat{x}_i^{k+r}(s) \big) + a_{i0} \big( x_0(s) - \hat{x}_i^{k+r}(s) \big) \Big\| \, ds.$$
Integrating (14) once more from $s = t_{k+r}^i$ to $s = t$ and applying Fubini's theorem to swap the order of integration, we get the following:
$$\begin{aligned} &\gamma_1 \int_{t_{k+r}^i}^{t} \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(v) - \hat{x}_i^{k+r}(v) \big) + a_{i0} \big( x_0(v) - \hat{x}_i^{k+r}(v) \big) \Big\| \, dv \\ &\quad \leq \int_{t_{k+r}^i}^{t} \| B_l K_l \| \, e^{\| A_l \| (s - t_{k+r}^i)} \int_{t_{k+r}^i}^{s} \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(v) - \hat{x}_i^{k+r}(v) \big) + a_{i0} \big( x_0(v) - \hat{x}_i^{k+r}(v) \big) \Big\| \, dv \, ds \\ &\quad \leq \| B_l K_l \| \int_{t_{k+r}^i}^{t} e^{\| A_l \| (s - t_{k+r}^i)} \, ds \times \int_{t_{k+r}^i}^{t} \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(s) - \hat{x}_i^{k+r}(s) \big) + a_{i0} \big( x_0(s) - \hat{x}_i^{k+r}(s) \big) \Big\| \, ds. \end{aligned}$$
The left-hand side of (15) is exactly $\gamma_1$ times the bracketed term in (14) integrated over $[t_{k+r}^i, t]$. To make this step precise, define the following:
$$\Phi_i(t) = \int_{t_{k+r}^i}^{t} \Big\| \sum_{j=1}^{N} a_{ij} \big( \hat{x}_j^{k+r}(v) - \hat{x}_i^{k+r}(v) \big) + a_{i0} \big( x_0(v) - \hat{x}_i^{k+r}(v) \big) \Big\| \, dv.$$
The integrand in $\Phi_i(t)$ is a norm and hence nonnegative. In the nontrivial case where a new event is generated, it cannot vanish identically on $(t_{k+r}^i, t)$, so we have $\Phi_i(t) > 0$ for all $t > t_{k+r}^i$. Inequality (15) can therefore be rewritten as
$$\gamma_1 \Phi_i(t) \leq \| B_l K_l \| \int_{t_{k+r}^i}^{t} e^{\| A_l \| (s - t_{k+r}^i)} \, ds \; \Phi_i(t),$$
and dividing both sides by $\Phi_i(t) > 0$ gives
$$\gamma_1 \leq \| B_l K_l \| \int_{t_{k+r}^i}^{t} e^{\| A_l \| (s - t_{k+r}^i)} \, ds.$$
Now let $\tau_i := t - t_{k+r}^i$ (the time elapsed since the last trigger). The right-hand side of (16) evaluates to $\| B_l K_l \| \int_0^{\tau_i} e^{\| A_l \| s} \, ds = \| B_l K_l \| \frac{e^{\| A_l \| \tau_i} - 1}{\| A_l \|}$. Thus (16) simplifies to
$$\| B_l K_l \| \, \frac{e^{\| A_l \| \tau_i} - 1}{\| A_l \|} \geq \gamma_1,$$
or equivalently
$$e^{\| A_l \| \tau_i} - 1 \geq \frac{\gamma_1 \| A_l \|}{\| B_l K_l \|}.$$
Solving for $\tau_i$ gives the following:
$$\tau_i \geq \frac{1}{\| A_l \|} \ln \Big( 1 + \frac{\gamma_1 \| A_l \|}{\| B_l K_l \|} \Big).$$
The right-hand side of (17) is a positive constant (depending only on the system matrices and the chosen $\gamma_1$) and serves as a uniform lower bound for any inter-event interval of agent $i$ while mode $l$ is active. Importantly, this bound does not depend on $t_{k+r}^i$ or the particular trajectory, only on the system and trigger parameters.
Since the mode $l$ was arbitrary and there are finitely many modes, we can take $\tau_{\min}$ as the minimum of the bounds in (17) over all possible modes $l \in \mathcal{I}_m$. This $\tau_{\min}$ is still positive, and thus every inter-event time $\tau_i$ is bounded below by $\tau_{\min} > 0$. This proves that agent $i$ cannot trigger infinitely often in any finite time interval, i.e., Zeno behavior is excluded.
The above argument holds for each follower $i = 1, \ldots, N$; hence, the event-triggered scheme (6)–(7) guarantees a positive minimum inter-event time for every agent. □
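The dwell bound (17) is easy to evaluate numerically. The sketch below uses spectral norms and hypothetical mode matrices; it merely illustrates that the bound is strictly positive for every mode, as the theorem asserts.

```python
# Sketch: evaluating the minimum inter-event time bound (17),
#   tau >= (1/||A_l||) * ln(1 + gamma_1 ||A_l|| / ||B_l K_l||),
# for hypothetical mode matrices (spectral norms throughout).
import numpy as np

def inter_event_bound(A, B, K, gamma1):
    nA = np.linalg.norm(A, 2)        # spectral norm of A_l
    nBK = np.linalg.norm(B @ K, 2)   # spectral norm of B_l K_l
    return np.log(1.0 + gamma1 * nA / nBK) / nA

A1 = np.array([[0.0, 1.0], [2.0, 0.0]])   # illustrative mode 1
A2 = np.array([[0.5, 0.0], [0.0, -1.0]])  # illustrative mode 2
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 2.0]])

taus = [inter_event_bound(A, B, K, gamma1=0.1) for A in (A1, A2)]
tau_min = min(taus)
print(tau_min > 0)   # → True: a strictly positive bound excludes Zeno
```

Taking the minimum over the (finitely many) modes reproduces the $\tau_{\min}$ used in the proof.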
Having established that trigger instants cannot accumulate, we next present conditions under which the proposed control and switching strategy guarantees convergence of all followers to the leader.

3.3. Leader–Follower Consensus Under State-Dependent Event-Triggered Switching

Theorem 2.
Consider the event-triggered switched MAS (1)–(2). Assume that for each mode $j \in \mathcal{I}_m$ there exist symmetric matrices
$$P_j > 0, \quad U_j = U_j^\top, \quad Y_{j1} > 0, \quad Y_{j2} > 0, \quad W_j > 0$$
such that the following LMIs hold:
$$\hat{\Xi}_j := \begin{bmatrix} \operatorname{sym}\big\{ (I_N \otimes P_j) A_{cl,j} \big\} + U_j & -(I_N \otimes P_j) G_j & A_{cl,j}^\top Y_{j1} \\ * & -W_j & -G_j^\top Y_{j1} \\ * & * & -(Y_{j1} + Y_{j2}) \end{bmatrix} < 0, \qquad \underbrace{\begin{bmatrix} W_j & \gamma_1 G_j \\ \gamma_1 G_j^\top & W_j \end{bmatrix}}_{\text{trigger S-procedure}} \geq 0,$$
where $A_{cl,j} := I_N \otimes A_j - H \otimes B_j K_j$ and $G_j = H \otimes B_j K_j$. Then the closed-loop system achieves exponential leader–follower consensus. In particular, there exist constants $c > 0$ and $\kappa > 0$ such that
$$\| x_i(t) - x_0(t) \| \leq c \, e^{-\kappa t}, \quad t \geq 0, \; i = 1, \ldots, N,$$
and hence $\lim_{t \to \infty} \| x_i(t) - x_0(t) \| = 0$ for all $i = 1, \ldots, N$. Moreover, Zeno behavior is excluded by Theorem 1.
Remark 6.
In the above LMIs, the matrix $W_j$ is a slack matrix introduced via the S-procedure in order to incorporate the event-triggering constraint into the Lyapunov inequality in a convex way.
More specifically, the triggering condition (see (6)–(7)) can be written in quadratic form with respect to the augmented vector $\zeta = [\delta^\top, e^\top]^\top$ as
$$\zeta^\top \begin{bmatrix} 0 & \gamma_1 G_j \\ \gamma_1 G_j^\top & -I \end{bmatrix} \zeta \geq 0,$$
which bounds the measurement error $e$ by the disagreement term involving $\delta$. When deriving an upper bound for $\dot{V}(t)$, a cross term of the form $-2 \delta^\top (I_N \otimes P_j) G_j e$ appears. The S-procedure allows us to combine this cross term with the above quadratic triggering inequality by adding a weighted version of the trigger matrix to the Lyapunov matrix. After completing the squares, this leads to the LMI in which the block associated with $e$ becomes $-W_j$ instead of $-I$, with $W_j \geq 0$ treated as an additional decision variable.
Therefore, $W_j$ does not represent any physical parameter of the agents; it is a design slack matrix that determines how strongly the triggering condition is used to compensate the $\delta$–$e$ coupling in $\dot{V}$. Allowing $W_j$ to be free (subject to $W_j \geq 0$) increases the degrees of freedom of the LMI and thus helps to reduce the conservatism of the resulting sufficient conditions.
Proof. 
Let $\sigma(t) = j_q$ for $t \in [t_q, t_{q+1})$. Set
$$\Delta_q = t_{q+1} - t_q, \qquad h_1(t) = \frac{t - t_q}{\Delta_q}, \qquad h_2(t) = \frac{t_{q+1} - t}{\Delta_q}, \qquad t \in [t_q, t_{q+1}),$$
so that $\dot{h}_1 = \Delta_q^{-1}$, $\dot{h}_2 = -\Delta_q^{-1}$, and $\frac{d}{dt}(h_1 h_2) = (h_2 - h_1)/\Delta_q$.
Stack the tracking errors as $\delta = [\delta_1^\top, \ldots, \delta_N^\top]^\top$ and the prediction errors as $e = [e_1^\top, \ldots, e_N^\top]^\top$. On the $q$-th subinterval define the following:
$$v_1(t) = \delta^\top(t) \big( I_N \otimes P_{j_q} \big) \delta(t) + h_1(t) h_2(t) \, \delta^\top(t) U_{j_q} \delta(t),$$
$$v_2(t) = h_2(t) \int_{t_q}^{t} \dot{\delta}^\top(s) Y_{j_q 1} \dot{\delta}(s) \, ds - h_1(t) \int_{t}^{t_{q+1}} \dot{\delta}^\top(s) Y_{j_q 2} \dot{\delta}(s) \, ds.$$
Let $V(t) = v_1(t) + v_2(t)$.
Note that the candidate Lyapunov functional used here is precisely the two-sided looped Lyapunov–Krasovskii functional announced in the introduction, adapted to the event-triggered setting. As a consequence, the derivative bounds and the LMIs obtained from this functional hold uniformly for all positive inter-event intervals and do not require specifying any explicit upper bound on their length. From the viewpoint of time-delay systems, each inter-event interval Δ q = t q + 1 t q plays the role of a (possibly time-varying) delay in the feedback loop. The above uniformity means that the conditions in Theorem 2 are delay-range-free: once the LMIs are feasible, exponential consensus is guaranteed for all sequences { Δ q } q 0 that satisfy the positive lower bound implied by Theorem 1, without the need to compute a finite maximum admissible delay.
Because, at each endpoint of the subinterval, either the $h$-factor or the integral in every term of $v_2$ vanishes (and $h_1 h_2$ vanishes as well), we have the following:
$$V(t_q^+) = \delta(t_q)^\top \big( I_N \otimes P_{j_q} \big) \delta(t_q), \qquad V(t_{q+1}^-) = \delta(t_{q+1})^\top \big( I_N \otimes P_{j_q} \big) \delta(t_{q+1}).$$
We next compute $\dot V(t)$ for $t \in [t_q, t_{q+1})$. Using $\dot h_1 = \Delta_q^{-1}$, $\dot h_2 = -\Delta_q^{-1}$, and the aggregated disagreement dynamics $\dot\delta = A_{cl,j_q} \delta - G_{j_q} e$, we obtain
$$\dot v_1(t) = 2\, \delta^\top \big( I_N \otimes P_{j_q} \big) \dot\delta + \frac{h_2(t) - h_1(t)}{\Delta_q}\, \delta^\top U_{j_q} \delta + 2\, h_1(t) h_2(t)\, \delta^\top U_{j_q} \dot\delta.$$
Define
$$I_1(t) = \int_{t_q}^{t} \dot\delta(s)^\top Y_{j_q}^1\, \dot\delta(s)\, ds, \qquad I_2(t) = \int_{t}^{t_{q+1}} \dot\delta(s)^\top Y_{j_q}^2\, \dot\delta(s)\, ds.$$
Then
$$\dot v_2(t) = \dot h_2(t) I_1(t) + h_2(t)\, \dot\delta^\top Y_{j_q}^1 \dot\delta - \dot h_1(t) I_2(t) + h_1(t)\, \dot\delta^\top Y_{j_q}^2 \dot\delta = -\frac{1}{\Delta_q} \big( I_1(t) + I_2(t) \big) + \dot\delta^\top \big( h_2(t) Y_{j_q}^1 + h_1(t) Y_{j_q}^2 \big) \dot\delta.$$
Since $Y_{j_q}^1, Y_{j_q}^2 > 0$ and the integrands are quadratic forms, $I_1(t), I_2(t) \ge 0$; hence
$$\dot v_2(t) \le \dot\delta^\top \big( h_2(t) Y_{j_q}^1 + h_1(t) Y_{j_q}^2 \big) \dot\delta.$$
From (21), using $\dot\delta = A_{cl,j_q} \delta - G_{j_q} e$,
$$2\, \delta^\top (I_N \otimes P_{j_q}) \dot\delta = 2\, \delta^\top (I_N \otimes P_{j_q}) \big( A_{cl,j_q} \delta - G_{j_q} e \big) = \delta^\top \big[ A_{cl,j_q}^\top (I_N \otimes P_{j_q}) + (I_N \otimes P_{j_q}) A_{cl,j_q} \big] \delta - 2\, \delta^\top (I_N \otimes P_{j_q}) G_{j_q}\, e.$$
For the third term in (21), apply Young's inequality with $Y_{j_q}^1 + Y_{j_q}^2 \succeq U_{j_q} \succeq 0$:
$$2\, h_1(t) h_2(t)\, \delta^\top U_{j_q} \dot\delta \le \delta^\top U_{j_q} \delta + \dot\delta^\top \big( Y_{j_q}^1 + Y_{j_q}^2 \big) \dot\delta.$$
Moreover, since $| h_2(t) - h_1(t) | \le 1$, we upper-bound the following:
$$\frac{h_2(t) - h_1(t)}{\Delta_q}\, \delta^\top U_{j_q} \delta \le \delta^\top U_{j_q} \delta.$$
Combining (23)–(26) yields the following:
$$\dot V = \dot v_1 + \dot v_2 \le \delta^\top \big[ A_{cl,j_q}^\top (I_N \otimes P_{j_q}) + (I_N \otimes P_{j_q}) A_{cl,j_q} + U_{j_q} \big] \delta - 2\, \delta^\top (I_N \otimes P_{j_q}) G_{j_q}\, e + \dot\delta^\top \big( Y_{j_q}^1 + Y_{j_q}^2 \big) \dot\delta.$$
The trigger (6)–(7) can be written as the quadratic constraint
$$\begin{bmatrix} \delta \\ e \end{bmatrix}^\top \begin{bmatrix} 0 & \gamma^{-1} G_{j_q} \\ \gamma^{-1} G_{j_q}^\top & -W_{j_q} \end{bmatrix} \begin{bmatrix} \delta \\ e \end{bmatrix} \ge 0.$$
By the S-procedure, this implies that, for some nonnegative scalar multiplier (absorbed into the design),
$$-2\, \delta^\top (I_N \otimes P_{j_q}) G_{j_q}\, e \le \begin{bmatrix} \delta \\ e \end{bmatrix}^\top \begin{bmatrix} 0 & -(I_N \otimes P_{j_q}) G_{j_q} \\ \ast & W_{j_q} \end{bmatrix} \begin{bmatrix} \delta \\ e \end{bmatrix},$$
where $W_{j_q} \succeq 0$ is a mode-dependent slack matrix introduced by the S-procedure to encode the triggering inequality and enlarge the feasible LMI region.
Introduce $\phi = [\delta^\top, e^\top, \dot\delta^\top]^\top$. Using (27) and (28), and completing squares for the $\dot\delta$-dependent part, we obtain the following:
$$\dot V(t) \le \phi^\top \underbrace{\begin{bmatrix} \Theta_{11} & \Theta_{12} & \Theta_{13} \\ \ast & \Theta_{22} & \Theta_{23} \\ \ast & \ast & \Theta_{33} \end{bmatrix}}_{\Theta_{j_q}} \phi,$$
with
$$\Theta_{11} := \operatorname{sym}\!\big( A_{cl,j_q}^\top (I_N \otimes P_{j_q}) \big) + U_{j_q}, \qquad \Theta_{12} := -(I_N \otimes P_{j_q}) G_{j_q}, \qquad \Theta_{13} := A_{cl,j_q}^\top Y_{j_q}^1,$$
$$\Theta_{22} := -W_{j_q}, \qquad \Theta_{23} := -G_{j_q}^\top Y_{j_q}^1, \qquad \Theta_{33} := -\big( Y_{j_q}^1 + Y_{j_q}^2 \big).$$
Origin of the blocks: $\Theta_{11}$ collects the symmetric part of $2\delta^\top (I_N \otimes P_{j_q}) A_{cl,j_q} \delta$ together with the $U_{j_q}$ term; $\Theta_{12}$ comes from the $\delta$–$e$ cross term $-2\delta^\top (I_N \otimes P_{j_q}) G_{j_q} e$; $\Theta_{13}$ and $\Theta_{23}$ are produced when completing squares with the $\dot\delta$ terms weighted by $Y_{j_q}^1$; $\Theta_{33}$ is the negative weight on $\dot\delta$ coming from $v_2$. The trigger inequality (6)–(7) is encoded by the S-procedure, which yields the additional LMI constraint (18).
If $\hat\Xi_{j_q} < 0$ in (18), then $\Theta_{j_q} < 0$, and hence there exists $\alpha_{j_q} > 0$ such that
$$\dot V(t) \le -2\, \alpha_{j_q} \| \delta(t) \|^2, \qquad t \in [t_q, t_{q+1}).$$
Integrating over $[t_q, t_{q+1})$ gives the interval-wise contraction
$$V(t_{q+1}^-) \le e^{-2 \alpha_{j_q} (t_{q+1} - t_q)}\, V(t_q^+).$$
Effect of the state-dependent switch at $t_{q+1}$. At the global event $t_{q+1}$ a subset of agents $S_{q+1}$ triggers. For each $i \in S_{q+1}$ the rule selects $j^+ = \sigma(t_{q+1}^+)$ minimizing $\delta_i^\top P_j \delta_i$ at $t_{q+1}$. Since $\delta$ is continuous, we have
$$V(t_{q+1}^+) = \sum_{i \in S_{q+1}} \delta_i(t_{q+1})^\top P_{j^+}\, \delta_i(t_{q+1}) + \sum_{i \notin S_{q+1}} \delta_i(t_{q+1})^\top P_{j}\, \delta_i(t_{q+1}) \le \sum_{i \in S_{q+1}} \delta_i(t_{q+1})^\top P_{j}\, \delta_i(t_{q+1}) + \sum_{i \notin S_{q+1}} \delta_i(t_{q+1})^\top P_{j}\, \delta_i(t_{q+1}) = V(t_{q+1}^-),$$
i.e., $V$ is nonincreasing at switching instants:
$$V(t_{q+1}^+) \le V(t_{q+1}^-).$$
Combining (31) and (32) recursively,
$$V(t_{q+1}^+) \le e^{-2 \alpha_{j_q} (t_{q+1} - t_q)}\, V(t_q^+) \le e^{-2 \alpha (t_{q+1} - t_q)}\, V(t_q^+), \qquad \alpha := \min_{j \in I_m} \alpha_j > 0.$$
Theorem 1 guarantees a uniform positive inter-event lower bound, so the sequence $\{ t_q \}$ has no finite accumulation point and $\sum_q (t_{q+1} - t_q) = \infty$. By induction, and using $t_0 = 0$, the estimate above yields the following:
$$V(t_q^+) \le e^{-2 \alpha t_q}\, V(0), \qquad q = 0, 1, 2, \dots.$$
On each flow interval $[t_q, t_{q+1})$ the derivative inequality $\dot V(t) \le 0$ implies $V(t) \le V(t_q^+)$; hence for all $t \ge 0$ we obtain the global bound
$$V(t) \le e^{-2 \alpha t}\, V(0).$$
In other words, V ( t ) decays to zero exponentially. Since V ( t ) is positive definite with respect to the stacked tracking error δ ( t ) , this shows that δ ( t ) converges to zero exponentially, i.e., x i ( t ) x 0 ( t ) 0 exponentially for all i = 1 , , N . Together with Theorem 1, which guarantees a uniform positive lower bound on the inter-event times and excludes Zeno behavior, this completes the proof. □
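For completeness, the passage from the decay of $V$ to the state bound claimed in the theorem can be made explicit at the event instants, where the looped terms of $V$ vanish:

```latex
\min_{j \in I_m} \lambda_{\min}(P_j)\, \| \delta(t_q) \|^2
\;\le\; \delta(t_q)^\top \big( I_N \otimes P_{j_q} \big)\, \delta(t_q)
\;=\; V(t_q^+)
\;\le\; e^{-2 \alpha t_q}\, V(0),
% hence, with c := \sqrt{ V(0) / \min_{j} \lambda_{\min}(P_j) } and
% \kappa := \alpha, one obtains \| \delta(t_q) \| \le c\, e^{-\kappa t_q}
% at every event instant t_q.
```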
Theorems 1 and 2 together guarantee that, under the proposed event-triggered control and switching strategy, the closed-loop MAS is both Zeno-free and convergent to consensus. Even if the individual agent dynamics are unstable, the cooperative design stabilizes the overall system. In the next section, we validate these theoretical results with numerical simulations, demonstrating that all followers track the leader and highlighting the communication savings due to event-triggering.

3.4. State-Feedback Gains

Theorem 3.
Let $H$ be the augmented Laplacian in Section 2, and let $\{ \lambda_r \}_{r \in I_+}$ be the set of its strictly positive eigenvalues ($I_+ \subseteq \{1, \dots, N\}$; for pinned leader graphs typically $I_+ = \{1, \dots, N\}$, for leaderless graphs $I_+ = \{2, \dots, N\}$). For each mode $j \in I_m$, suppose there exist matrices
$$P_j > 0, \qquad L_j \in \mathbb{R}^{m \times n}, \qquad \alpha_j > 0$$
such that the following convex LMIs hold simultaneously for all $\lambda_r \in \{ \lambda_r \}_{r \in I_+}$:
$$\operatorname{sym}\!\big( A_j P_j - \lambda_r B_j L_j \big) \preceq -2 \alpha_j P_j.$$
Then, with the mode-dependent static gains
$$K_j := L_j P_j^{-1} \qquad (j \in I_m),$$
the disagreement dynamics $\dot\delta = ( I_N \otimes A_j - H \otimes B_j K_j ) \delta$ is exponentially stable for each active mode $j$, with decay rate at least $\alpha_j$. Consequently, by fixing $K_j$ as above and solving the LMIs in Theorem 2 for $\{ U_j, Y_j^1, Y_j^2, W_j \}$, the conditions of Theorem 2 become feasible and the closed-loop event-triggered switched MAS achieves leader–follower consensus while remaining Zeno-free.
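As an illustration of how mode-dependent gains of this type can be computed numerically, the sketch below uses a Riccati (LQR) synthesis as a lightweight stand-in for the LMI (33): any gain rendering $A_j - \lambda_r B_j K_j$ Hurwitz stabilizes the corresponding disagreement block. The matrices and the Laplacian eigenvalue `lam` are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical data (NOT the paper's matrices): a single open-loop-unstable
# mode and one positive eigenvalue lam of the augmented Laplacian H.
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])   # eigenvalues +/- sqrt(2): unstable
B = np.array([[0.0],
              [1.0]])
lam = 1.5

# Riccati (LQR) stand-in for the LMI synthesis: K stabilizes the decoupled
# disagreement block  d/dt(delta_r) = (A - lam*B*K) delta_r.
B_lam = lam * B
P = solve_continuous_are(A, B_lam, np.eye(2), np.eye(1))  # ARE solution
K = B_lam.T @ P                                           # K = R^{-1} B_lam^T P, R = I

A_cl = A - lam * B @ K
print(np.linalg.eigvals(A_cl).real.max())  # strictly negative
```

A genuine LMI solution (for example with an SDP solver) enforces the condition for all eigenvalues $\lambda_r$ simultaneously, as in (33); the Riccati shortcut above treats a single $\lambda_r$ at a time.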
Proof. 
Let $T$ be an orthogonal matrix that diagonalizes $H$: $T^\top H T = \operatorname{diag}(\lambda_1, \dots, \lambda_N)$ with $\lambda_r \ge 0$. Applying the transformation $\tilde\delta = (T^\top \otimes I_n) \delta$, the aggregated error dynamics decouples into $N$ blocks
$$\dot{\tilde\delta}_r = \big( A_j - \lambda_r B_j K_j \big) \tilde\delta_r, \qquad r = 1, \dots, N.$$
For any $r \in I_+$ and any $j \in I_m$, define the Lyapunov function $V_{j,r}(\tilde\delta_r) := \tilde\delta_r^\top P_j \tilde\delta_r$. Then
$$\dot V_{j,r} = \tilde\delta_r^\top \operatorname{sym}\!\big( A_j P_j - \lambda_r B_j L_j \big) \tilde\delta_r \le -2 \alpha_j\, \tilde\delta_r^\top P_j \tilde\delta_r \le -2 \alpha_j \lambda_{\min}(P_j)\, \| \tilde\delta_r \|^2,$$
where we used $L_j = K_j P_j$ and the LMI (33). Hence each disagreement block is exponentially stable with rate $\alpha_j$, so the overall disagreement state $\delta$ decays exponentially whenever mode $j$ is active. With the gains $K_j$ obtained above fixed, $A_{cl,j}$ and $G_j$ are constant, and Theorem 2 reduces to a convex feasibility problem in the auxiliary variables $\{ U_j, Y_j^1, Y_j^2, W_j \}$; feasibility ensures that the quadratic upper bound (25) is negative definite on flows while the S-procedure captures the trigger, yielding consensus and Zeno exclusion as claimed. □
Remark 7
(Common gain option). If a common feedback gain is desired, set $P_j \equiv P > 0$ and $L_j \equiv L$ for all $j$. Then solve
$$\operatorname{sym}\!\big( A_j P - \lambda_r B_j L \big) \preceq -2 \alpha P, \qquad j \in I_m, \quad \lambda_r \in \{ \lambda_r \}_{r \in I_+},$$
for the variables $P > 0$, $L$, $\alpha > 0$, and set $K := L P^{-1}$. This yields a single $K$ that is robust to all modes and all positive Laplacian eigenvalues, at the price of extra conservatism.
Remark 8
(Practical solver setup). In practice, one may maximize the worst-case decay margin, e.g., maximize $\alpha$ subject to (33) (or (34)), possibly with normalizations such as $P_j \preceq I$. The obtained $K_j$ (or $K$) is then substituted into Theorem 2 to compute $\{ U_j, Y_j^1, Y_j^2, W_j \}$ and the trigger threshold $\gamma$.

4. Numerical Example

4.1. Example 1

For a switching multi-agent system consisting of one leader and four followers, each agent has two distinct operating modes with the following system matrices; the communication topology is shown in Figure 2.
$$A_1 = \begin{bmatrix} 2.0 & 1.0 & 0.0 \\ 1.5 & 0.8 & 0.6 \\ 0.0 & 0.7 & 0.5 \end{bmatrix}, \qquad B_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix},$$
$$A_2 = \begin{bmatrix} 1.0 & 1.0 & 0.5 \\ 0.8 & 1.6 & 1.0 \\ 0.9 & 0.3 & 1.5 \end{bmatrix}, \qquad B_2 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.$$
Figure 2. Example 1: communication topology of the leader–follower multi-agent system.
A straightforward eigenvalue calculation yields
$$\lambda(A_1) = \{\, 1.3747,\; -0.3373 \pm 0.7305 i \,\},$$
so $A_1$ has one eigenvalue with positive real part and is therefore unstable. For the second mode, we obtain
$$\lambda(A_2) = \{\, -0.4947,\; -1.8027 \pm 1.0569 i \,\},$$
whose real parts are all negative; hence $A_2$ is a Hurwitz stable matrix. In other words, Example 1 involves one unstable subsystem (mode 1) and one stable subsystem (mode 2).
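Stability checks of this kind are a one-line eigenvalue computation. The sketch below uses toy matrices for illustration (the sign pattern of the paper's $A_1$, $A_2$ is not fully recoverable from the rendered text, so they are not reproduced here):

```python
import numpy as np

def max_real_eig(A):
    """Largest real part among the eigenvalues of A."""
    return np.linalg.eigvals(A).real.max()

# Toy matrices for illustration (not the paper's A_1, A_2):
A_unstable = np.array([[0.0, 1.0],
                       [2.0, 0.0]])    # eigenvalues +/- sqrt(2)
A_hurwitz = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])   # eigenvalues -1 and -2

print(max_real_eig(A_unstable) > 0)  # True: this mode is unstable
print(max_real_eig(A_hurwitz) < 0)   # True: this mode is Hurwitz
```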
The communication topology between the agents is shown in Figure 2; it contains a directed spanning tree with the leader as the root. Solving the linear matrix inequality (18) of Theorem 2 yields the following parameters:
$$K_1 = \begin{bmatrix} 10.5081 & 4.1998 & 2.0607 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} 0.4080 & 0.2672 & 0.0569 \end{bmatrix},$$
$$P_1 = \begin{bmatrix} 19.0465 & 12.0391 & 6.6166 \\ 12.0391 & 8.0668 & 3.8095 \\ 6.6166 & 3.8095 & 2.6662 \end{bmatrix}, \qquad P_2 = \begin{bmatrix} 1.2359 & 0.9576 & 0.0551 \\ 0.9576 & 1.4479 & 0.3025 \\ 0.0551 & 0.3025 & 0.2905 \end{bmatrix}.$$
The initial states of the leader and followers are chosen as
$$x_0 = \begin{bmatrix} 1.0 \\ 3.5 \\ 1.0 \end{bmatrix}, \quad x_1 = \begin{bmatrix} 2.5 \\ 5.0 \\ 2.0 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 3.0 \\ 2.0 \\ 0.5 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 5.0 \\ 4.0 \\ 2.5 \end{bmatrix}, \quad x_4 = \begin{bmatrix} 2.0 \\ 1.0 \\ 1.5 \end{bmatrix}.$$
These particular initial states are not specially tuned to facilitate convergence; they are simply taken as distinct, nonzero vectors in R 3 so that the transient responses of the followers and their convergence toward the leader can be clearly visualized in the simulations. According to Theorem 2, once the corresponding LMIs are feasible, the proposed event-triggered scheme guarantees leader–follower consensus for arbitrary initial conditions x i ( 0 ) R 3 , i = 0 , , 4 . For this example, we choose the parameter γ = 0.10 × max i γ i in the triggering condition, which yields a positive minimum inter-event interval τ min = 0.5 ms and thus excludes Zeno behavior.
Figure 3, Figure 4 and Figure 5 depict the trajectories of the tracking error components δ i k ( t ) , k = 1 , 2 , 3 , for all followers. One can see that all error components converge to zero, which confirms that every follower asymptotically tracks the leader under the proposed event-triggered scheme. Figure 6 shows the switching signal σ ( t ) of the closed-loop system, while Figure 7 illustrates the event-triggered instants of all agents. In contrast to a continuous or time-driven (periodic) communication strategy, where information would be exchanged at every sampling instant, the triggering instants in Figure 7 are relatively sparse. From a network perspective, this means that the effective connectivity of the communication graph is reduced in time, leading to a substantial decrease in the number of transmissions and thus an improvement in communication efficiency. At the same time, the convergence of the tracking errors in Figure 3, Figure 4 and Figure 5 demonstrates that the closed-loop consensus performance is well preserved despite this reduced connectivity.
Figure 3. Example 1: trajectories of the first component of the tracking errors $\delta_i^1(t)$ for the four followers, where $\delta_i^1(t) = x_i^1(t) - x_0^1(t)$.
Figure 4. Example 1: trajectories of the second component of the tracking errors $\delta_i^2(t)$ for the four followers, where $\delta_i^2(t) = x_i^2(t) - x_0^2(t)$.
Figure 5. Example 1: trajectories of the third component of the tracking errors $\delta_i^3(t)$ for the four followers, where $\delta_i^3(t) = x_i^3(t) - x_0^3(t)$.
Figure 6. Example 1: switching signal σ ( t ) of the closed-loop switched multi-agent system.
Figure 7. Example 1: event-triggered communication instants of the four followers.
To quantitatively evaluate the communication efficiency of the proposed event-triggered scheme, we compare it with a conventional periodic control strategy in Example 1. The periodic controller samples every $h = 0.01$ s, resulting in $N_{per} = 4T/h = 6000$ updates over the $T = 15$ s simulation horizon. In contrast, under the proposed asynchronous event-triggered mechanism, the four agents are triggered 57, 45, 69, and 36 times, respectively, yielding a total of $N_{et} = 207$ updates and an average of $\bar N_{et} = 51.75$ per agent. As shown in Table 1, the event-triggered strategy achieves nearly the same convergence performance (settling time $T_s \approx 4.9$ s) while reducing communication by approximately 96.6%.
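The bookkeeping behind Table 1 reduces to simple counting; the sketch below reproduces it with the numbers taken directly from the text:

```python
# Update-count bookkeeping for Example 1; all numbers are taken from the text.
T, h, n_agents = 15.0, 0.01, 4
N_per = round(n_agents * T / h)   # periodic scheme: 4 * 15 / 0.01 = 6000 updates
triggers = [57, 45, 69, 36]       # event-triggered counts per agent
N_et = sum(triggers)              # 207 updates in total
avg_et = N_et / n_agents          # 51.75 updates per agent on average
saving = 1.0 - N_et / N_per       # ~0.9655, i.e. roughly 96.6% fewer updates
print(N_per, N_et, avg_et, saving)
```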
Table 1. Comparison between periodic and event-triggered update schemes in Example 1.

4.2. Example 2

Consider a network of pendulums used to demonstrate the effectiveness of the switching control strategy, consisting of four followers and one leader. The linearized pendulum equation of the $i$th follower is
$$\tilde m_\sigma \tilde l_\sigma^2\, \ddot\theta_i = -\tilde m_\sigma g \tilde l_\sigma\, \theta_i + u_i,$$
where $\theta_i$ is the deflection angle of the pendulum rod, $u_i$ is the control torque, $\tilde l_\sigma$ and $\tilde m_\sigma$ are the length and mass of each pendulum in mode $\sigma$, and $g = 9.8$ m/s$^2$ is the gravitational acceleration. In this example, $\theta_i$ is measured in radians and denotes a small deviation around the downward equilibrium $\theta_i = 0$. With the chosen initial conditions, $|\theta_i(0)| \le 0.2$ for all $i = 0, \dots, 4$, so the pendulum angles remain in the small-angle regime throughout the simulation and the linearized model above is valid.
The leader's dynamics are given by
$$\tilde m_\sigma \tilde l_\sigma^2\, \ddot\theta_0 = -\tilde m_\sigma g \tilde l_\sigma\, \theta_0.$$
Let $x_i = \operatorname{col}\{ \theta_i, \dot\theta_i \}$ be the state vector; the system matrices are then
$$A_1 = \begin{bmatrix} 0 & 1 \\ -\dfrac{g}{\tilde l_1} & 0 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0 \\ \dfrac{1}{\tilde m_1 \tilde l_1^2} \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 & 1 \\ -\dfrac{g}{\tilde l_2} & 0 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0 \\ \dfrac{1}{\tilde m_2 \tilde l_2^2} \end{bmatrix},$$
where the selected parameters are $\tilde m_1 = 1$ kg, $\tilde m_2 = 1.2$ kg, $\tilde l_1 = 1$ m, $\tilde l_2 = 1.4$ m. Consider the communication topology shown in Figure 8.
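The mode matrices follow directly from the physical parameters. The sketch below builds them under the sign convention $\ddot\theta = -(g/\tilde l)\theta + u/(\tilde m \tilde l^2)$; the signs are an assumption, since the rendered equations do not show them.

```python
import numpy as np

g = 9.8  # gravitational acceleration, m/s^2

def pendulum_mode(m, l):
    """Linearized pendulum matrices for the state x = (theta, theta_dot).

    Assumed sign convention (not shown in the rendered equations):
        theta'' = -(g / l) * theta + u / (m * l**2).
    """
    A = np.array([[0.0, 1.0],
                  [-g / l, 0.0]])
    B = np.array([[0.0],
                  [1.0 / (m * l**2)]])
    return A, B

A1, B1 = pendulum_mode(m=1.0, l=1.0)  # mode 1: -g/l = -9.8, 1/(m l^2) = 1
A2, B2 = pendulum_mode(m=1.2, l=1.4)  # mode 2: -g/l = -7.0, 1/(m l^2) ~ 0.4252
print(A2[1, 0], B2[1, 0])
```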
Figure 8. Example 2: communication topology of the leader–follower pendulum network.
We obtain the controller gains:
$$K_1 = \begin{bmatrix} 31.1642 & 20.3272 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} 58.3917 & 37.8153 \end{bmatrix}.$$
These gains are obtained by solving the LMI-based synthesis conditions in Theorem 3, which enforce a relatively fast convergence rate for the closed-loop pendulum dynamics. For the linearized model
$$\ddot\theta_i = -\frac{g}{\tilde l_\sigma}\, \theta_i + \frac{1}{\tilde m_\sigma \tilde l_\sigma^2}\, u_i,$$
the gravitational term g / l ˜ σ is comparatively large, whereas the input channel is scaled by 1 / ( m ˜ σ l ˜ σ 2 ) . As a result, the feedback gains K 1 and K 2 must take relatively large numerical values in order to compensate the gravitational torque and shift the closed-loop eigenvalues sufficiently to the left half-plane. Nevertheless, since the pendulum angles and angular velocities are small in this example, the resulting control torques remain of moderate magnitude in the simulations.
The initial states are chosen as follows:
$$x_0 = \begin{bmatrix} 0.01 \\ 0.01 \end{bmatrix}, \quad x_1 = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 0.2 \\ 0.7 \end{bmatrix}, \quad x_4 = \begin{bmatrix} 0.2 \\ 0.7 \end{bmatrix}.$$
Figure 9 shows the switching signals of the agents. Figure 10 depicts the event-triggered instants of the agents, which exhibit no Zeno behavior. Figures 11 and 12 present the trajectories of the tracking errors between the followers and the leader, from which it is clear that leader–follower consensus is achieved under the proposed event-triggered switching scheme.
Figure 9. Example 2: switching signal σ ( t ) of the closed-loop pendulum system.
Figure 10. Example 2: event-triggered communication instants of the four followers.
Figure 11. Example 2: trajectories of the first component of the tracking errors $\delta_i^1(t)$ for the four pendulum followers, where $\delta_i^1(t) = x_i^1(t) - x_0^1(t)$.
Figure 12. Example 2: trajectories of the second component of the tracking errors $\delta_i^2(t)$ for the four pendulum followers, where $\delta_i^2(t) = x_i^2(t) - x_0^2(t)$.
To further assess the communication efficiency and scalability of the proposed event-triggered mechanism, we also carry out a second simulation in Example 2. The simulation horizon is T = 8 s. For the conventional periodic control strategy with sampling period h = 0.01 s, this leads to N per , 2 = 4 T / h = 3200 control/communication updates. In contrast, under the proposed asynchronous event-triggered scheme, the four agents are triggered 33, 71, 55 and 45 times, respectively, yielding a total of N et , 2 = 204 updates and an average of N ¯ et , 2 = 51.00 per agent. As shown in Table 2, the event-triggered strategy achieves almost the same convergence performance as the periodic controller, while reducing the overall communication load by approximately 93.6 % .
Table 2. Comparison between periodic and event-triggered update schemes in Example 2.

5. Conclusions

This paper has investigated the leader–follower consensus problem for switched multi-agent systems with dynamically changing modes under the combined action of event-triggered control and state-dependent switching signals. The proposed event-triggered mechanism not only reduces the communication and computation burden among neighboring agents, but also improves the overall performance of the system. It is rigorously proved that the designed event-triggered control scheme guarantees a strictly positive dwell time between any two consecutive triggering instants for all agents and modes, which completely excludes the Zeno phenomenon. In addition, a more general switching rule is introduced, namely a state-dependent switching signal updated only at event-triggered instants. This rule breaks through the limitations of traditional time-dependent switching laws and allows leader–follower consensus to be achieved even when all subsystems are unstable.
From a practical point of view, the numerical examples demonstrate that the proposed event-triggered leader–follower switching scheme can achieve consensus with a substantial reduction of control and communication updates when compared with a conventional periodic implementation. For the representative scenarios considered in Section 4, the total number of updates is reduced by more than ninety percent while the convergence speed remains almost unchanged. These results indicate that the proposed framework is promising for resource-limited networked control applications in which bandwidth and energy consumption are critical concerns. At the same time, the present study is subject to several constraints. The analysis is carried out for continuous-time linear multi-agent systems with known dynamics, under a fixed connected communication topology and ideal communication channels without delays, packet dropouts, or measurement noise. Moreover, only state-feedback controllers are considered, and the LMI conditions are derived for a specific class of looped Lyapunov–Krasovskii functionals. Extending the framework to nonlinear or uncertain agent dynamics, switching or time-varying topologies, output-feedback settings, and more general communication imperfections will be an interesting direction for future research.

Author Contributions

Conceptualization, J.Z., X.L. and J.S.; methodology, X.L., T.W. and H.W.; writing—original draft preparation, J.Z. and X.L.; writing—review and editing, X.L., T.W., H.W. and J.S.; supervision, X.L. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Qianyi Technology (Changchun) Co., Ltd. (Grant Number RES0008506).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that this study was funded by Qianyi Technology (Changchun) Co., Ltd. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.

References

  1. Oh, K.K.; Park, M.C.; Ahn, H.S. A survey of multi-agent formation control. Automatica 2015, 53, 424–440. [Google Scholar] [CrossRef]
  2. Balbo, F.; Mandiau, R.; Zargayouna, M. Extended review of multi-agent solutions to Advanced Public Transportation Systems challenges. Public Transp. 2024, 16, 159–186. [Google Scholar] [CrossRef]
  3. Hespanha, J.P.; Naghshtabrizi, P.; Xu, Y. A survey of recent results in networked control systems. Proc. IEEE 2007, 95, 138–162. [Google Scholar] [CrossRef]
  4. Wang, Y.; Li, Z.; Zhang, H. Cooperative control of multi-missile systems. IET Control Theory Appl. 2015, 9, 1833–1840. [Google Scholar] [CrossRef]
  5. Nedić, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48–61. [Google Scholar] [CrossRef]
  6. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar] [CrossRef]
  7. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661. [Google Scholar] [CrossRef]
  8. Moreau, L. Stability of multiagent systems with time-dependent communication links. IEEE Trans. Autom. Control 2005, 50, 169–182. [Google Scholar] [CrossRef]
  9. Su, H.; Liu, Y.; Zeng, Z. Second-order consensus for multiagent systems via intermittent sampled position data control. IEEE Trans. Cybern. 2019, 50, 2063–2072. [Google Scholar] [CrossRef]
  10. Xie, G.; Liu, H.; Wang, L.; Jia, Y. Consensus in networked multi-agent systems via sampled control: Fixed topology case. In Proceedings of the 2009 American Control Conference, St. Louis, MO, USA, 10–12 June 2009; pp. 3902–3907. [Google Scholar] [CrossRef]
  11. Yu, W.; Zhou, L.; Yu, X.; Lü, J.; Lu, R. Consensus in multi-agent systems with second-order dynamics and sampled data. IEEE Trans. Ind. Inform. 2012, 9, 4. [Google Scholar] [CrossRef]
  12. Xiao, F.; Chen, T. Sampled-data consensus for multiple double integrators with arbitrary sampling. IEEE Trans. Autom. Control 2012, 57, 3230–3235. [Google Scholar] [CrossRef]
  13. Årzén, K.E. A simple event-based PID controller. IFAC Proc. Vol. 1999, 32, 8687–8692. [Google Scholar] [CrossRef]
  14. Ding, L.; Han, Q.L.; Ge, X.; Zhang, X.M. An overview of recent advances in event-triggered consensus of multiagent systems. IEEE Trans. Cybern. 2017, 48, 1110–1123. [Google Scholar] [CrossRef] [PubMed]
  15. Sun, W.; Wu, J.; Su, S.F.; Zhao, X. Neural network-based fixed-time tracking control for input-quantized nonlinear systems with actuator faults. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 3978–3988. [Google Scholar] [CrossRef]
  16. Ma, Y.; Zhao, J. Distributed event-triggered consensus using only triggered information for multi-agent systems under fixed and switching topologies. IET Control Theory Appl. 2018, 12, 1357–1365. [Google Scholar] [CrossRef]
  17. Zou, W.; Shi, P.; Xiang, Z.; Shi, Y. Consensus tracking control of switched stochastic nonlinear multiagent systems via event-triggered strategy. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1036–1045. [Google Scholar] [CrossRef]
  18. Xing, M.L.; Deng, F.Q. Tracking control for stochastic multi-agent systems based on hybrid event-triggered mechanism. Asian J. Control 2019, 21, 2352–2363. [Google Scholar] [CrossRef]
  19. Ruan, X.; Feng, J.; Xu, C.; Wang, J. Observer-based dynamic event-triggered strategies for leader-following consensus of multi-agent systems with disturbances. IEEE Trans. Netw. Sci. Eng. 2020, 7, 3148–3158. [Google Scholar] [CrossRef]
  20. Donganont, M. Scaled consensus of hybrid multi-agent systems via impulsive protocols. J. Math. Comput. Sci. 2025, 36, 275–289. [Google Scholar]
  21. Donganont, M. Leader-following finite-time scaled consensus problems in multi-agent systems. J. Math. Comput. Sci. 2025, 38, 464–478. [Google Scholar]
  22. Park, C.; Donganont, S.; Donganont, M. Achieving Edge Consensus in Hybrid Multi-Agent Systems: Scaled Dynamics and Protocol Design. Eur. J. Pure Appl. Math. 2025, 18, 5549. [Google Scholar] [CrossRef]
  23. Jia, H.; Zhao, J. Output regulation of switched linear multi-agent systems: An agent-dependent average dwell time method. Int. J. Syst. Sci. 2016, 47, 2510–2520. [Google Scholar] [CrossRef]
  24. He, G.; Zhao, J. Leader-following consensus for switched uncertain multi-agent systems with delay time under directed topology. In Proceedings of the 2023 42nd Chinese Control Conference (CCC), Tianjin, China, 24–26 July 2023; pp. 1007–1011. [Google Scholar] [CrossRef]
  25. Wang, X.; Zhao, J. Event-triggered control for switched linear systems: A control and switching joint triggering strategy. ISA Trans. 2022, 122, 380–386. [Google Scholar] [CrossRef] [PubMed]
  26. Zhang, S.; Zhao, J. Event-triggered-based switching law design for switched systems. Nonlinear Dyn. 2024, 112, 19985–19998. [Google Scholar] [CrossRef]
  27. Ma, Y.; Zhao, J. Distributed integral-based event-triggered scheme for cooperative output regulation of switched multi-agent systems. Inf. Sci. 2018, 457, 208–221. [Google Scholar] [CrossRef]
  28. Ma, Y.; Zhao, J. Distributed adaptive integral-type event-triggered cooperative output regulation of switched multiagent systems by agent-dependent switching with dwell time. Int. J. Robust Nonlinear Control 2020, 30, 2550–2569. [Google Scholar] [CrossRef]
  29. He, G.; Zhao, J. Fully distributed event-triggered cooperative output regulation for switched multi-agent systems with combined switching mechanism. Inf. Sci. 2023, 638, 118970. [Google Scholar] [CrossRef]
  30. Zhu, W.; Jiang, Z.P. Event-based leader-following consensus of multi-agent systems with input time delay. IEEE Trans. Autom. Control 2014, 60, 1362–1367. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
