Mathematics
  • Feature Paper
  • Article
  • Open Access

13 November 2025

Game-Based Consensus of Switching Multi-Agent Systems

Institute of Complexity Science, College of Automation, Qingdao University, Qingdao 266071, China
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Analysis and Applications of Control Systems Theory

Abstract

This paper investigates the leader–follower consensus problem for a class of second-order multi-agent systems. These systems are composed of both discrete-time and continuous-time subsystems and are governed by switching dynamics. Within the framework of a fixed directed topology involving multiple leaders, two control strategies are formulated. One applies separate control protocols to the continuous and discrete subsystems, while the other adopts a unified sampled-data control protocol. First, a multi-player game model is established based on the analysis and simulation of conflict behaviors among agents, and the existence of a unique Nash equilibrium (NE) for the system is proven. Then, based on the Nash equilibrium, a continuous–discrete-time game-based switching control system is formulated. Furthermore, the results confirm that the proposed system achieves consensus under both control strategies, even under arbitrary switching patterns. Finally, the performance of the approach is verified through numerical simulations.

1. Introduction

During the past few years, increasing attention has been devoted to multi-agent systems (MASs), which consist of autonomous agents cooperating through interaction to complete tasks []. Among studies on MASs (e.g., controllability [,,], stabilizability [,], containment control [,], leader-following consensus [,,]), consensus remains one of the primary research directions.
The concept of consensus was first introduced by DeGroot [], who proposed a model demonstrating how agents can reach an agreement in parameter estimation. This work laid the foundation for subsequent research on consensus in MASs. Olfati-Saber [] studied the problem of average consensus in connected undirected graphs and extended it to cases involving time delays and switched topologies. The work of Ren and Beard [] demonstrated that for multi-agent systems under directed topologies, the existence of a spanning tree in the communication graph is the fundamental criterion for achieving consensus. Liu and Ji [] introduced a flexible dynamic event-triggering approach related to the triggering threshold and addressed consensus in MASs with general linear dynamics, resulting in significantly reduced communication requirements. It is worth noting that the above results are only applicable to linear systems. A significant advancement was made by Liu et al. [], who derived consensus criteria for nonlinear systems subject to both fixed and switching topologies. For MASs with communication topologies involving negative weights, Tian et al. [] established a unified framework. Their work provides necessary and sufficient conditions for consensus under such conditions.
The aforementioned studies concentrated exclusively on MASs with a single type of dynamics and did not consider the more prevalent hybrid MASs encountered in practical engineering applications, which comprise two interacting components: subsystems with discrete-time (DT) dynamics [] and subsystems with continuous-time (CT) dynamics. Numerous instances of such systems exist [,]; consequently, investigating the coordinated control of hybrid MASs is of substantial practical significance and application value. For example, in human–robot systems, commands from humans are discrete, while the movements of the robot are continuous []. Zheng et al. [,] investigated the consensus problem in first-order and second-order hybrid MASs under fixed directed communication topologies. By analyzing various communication strategies between DT and CT agents, they established a foundational framework for hybrid systems interactions.
In practical scenarios, various hybrid systems can be formulated as switching systems, wherein the dynamical behavior of agents may switch arbitrarily between DT and CT dynamics. Such switching behavior commonly arises in applications including power systems and automatic speed regulation systems. For instance, a system might be controlled by either physical or digital regulators, governed by switching rules between these regulators. When addressing the issue of long sampling periods, a discretized model of the CT dynamics is considered, whereas for high-precision, short-sampling-period systems, it is necessary to handle time-varying values. Consequently, the system functions as a switching system that coordinates continuous-time and discrete-time dynamics. Zhai et al. [] conducted a stability analysis of continuous–discrete systems using a Lie-algebraic technique. Building on this, Zheng et al. [] used classic protocol control methods to achieve consensus under switching dynamics. In addition, Lin et al. [] also investigated finite-time control and quantized control to achieve consensus. Liu et al. [,] proposed two control protocols for the consensus control of second-order MASs with switching dynamics, and provided several sufficient and necessary conditions, in terms of network topology and sampling period, to guarantee consensus.
As the demand for reducing energy consumption in MASs continues to grow in practical applications, game theory has been applied for distributed control of MASs. Each agent in MASs functions as an independent player, making individual decisions based on local information exchange with its neighbors. By defining utility functions for the agents and assuming they adjust their behavior to optimize their utility, the collaborative task can be modeled as a distributed game. Ma et al. [] analyzed games between CT and DT MASs, establishing sufficient/necessary conditions for consensus in the resulting hybrid system and proposing a method to accelerate convergence. Wang et al. [] broadened the result to the more general scenario of an arbitrary number of interacting subsystems. A data-driven approach was employed by Xie et al. [] to address the multi-player nonzero-sum game in unknown linear systems, enabling the derivation of dynamic output feedback Nash strategies using only input–output data. Motivated by the aforementioned works, this paper examines the consensus problem for second-order MASs with switching dynamics under a leader–follower framework, using a game-theoretic approach. In contrast to existing game-theoretic approaches [,,], our work is distinguished by its integration of switching CT-DT dynamics with a non-cooperative game among multiple leaders. While prior studies primarily focus on systems with a single type of dynamics or cooperative interactions, this study explicitly models the strategic competition between leaders under arbitrary switching patterns, establishing a unique framework that bridges game theory and hybrid system control.
Relative to the existing literature, the principal contributions of our work are delineated as follows.
  • The competitive behaviors among multiple leaders in MASs are modeled as a non-cooperative game, within which the individual cost function for each player is designed. Solving this game reveals that it admits a unique Nash equilibrium solution.
  • Based on the formulated game, two control strategies are developed for consensus attainment. One employs distinct CT and DT inputs for the respective subsystems, while the other utilizes a unified sampled-data control protocol for both, thus avoiding frequent controller switching.
  • Two sufficient and necessary conditions are established for the proposed control protocols.
The proposed framework finds potential applications in several real-world domains where hybrid continuous–discrete dynamics and competitive interactions coexist. For instance, in smart grid systems, multiple distributed energy resources (leaders) may compete to optimize their own profits while coordinating with local controllers (followers) to maintain frequency and voltage stability—a scenario naturally captured by our non-cooperative game model. Similarly, in multi-robot collaborative systems, such as warehouse logistics or search-and-rescue missions, robots may switch between continuous motion and discrete decision-making modes, while competing for tasks or resources. Our unified sampled-data control protocol can effectively avoid frequent controller switching and ensure consensus under dynamic topologies. These examples illustrate the practical relevance and generality of our theoretical results.
The paper is organized as follows. Section 2 introduces the necessary preliminaries. The main theoretical results are developed in Section 3 and Section 4. Section 5 presents numerical simulations to illustrate the findings. The work is concluded in Section 6.
Some notations and symbols are listed in Table 1.
Table 1. Notation Summary.

2. Preliminaries

2.1. Graph Theory

Consider a directed graph $G = (V, E, A)$, defined by a node set $V = \{v_1, \dots, v_N\}$, a set of directed edges $E \subseteq V \times V$, and an adjacency matrix $A = [a_{ij}] \in \mathbb{R}^{N \times N}$. A directed edge $e_{ij}$ in $G$ is represented as the ordered pair $(v_i, v_j)$, indicating the direction of information flow from $v_j$ to $v_i$. The adjacency matrix satisfies: $a_{ij} > 0$ if $e_{ij} \in E$, else $a_{ij} = 0$; $a_{ii} = 0$; and $a_{ij} = a_{ji}$ if the graph is undirected. The Laplacian matrix $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ of $G$ has entries defined by
$$l_{ij} = \begin{cases} \sum_{j \in N_i} a_{ij}, & \text{if } i = j, \\ -a_{ij}, & \text{if } i \neq j. \end{cases}$$
A graph is connected when a path exists between every pair of distinct nodes. A tree is a connected subgraph that contains no cycles. A directed spanning tree of G is a directed tree that contains every node in G. A directed spanning forest comprises multiple directed spanning trees that share no common vertices.
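The Laplacian construction above can be sketched numerically. The adjacency matrix below is an arbitrary illustration (not a topology from this paper); it only serves to show the row-sum/off-diagonal rule and the zero-row-sum property of $L$.

```python
import numpy as np

# Hypothetical 4-node directed adjacency matrix: a_ij > 0 means node i
# receives information from node j (illustrative values only).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

# Laplacian: l_ii = sum_j a_ij (weighted in-degree), l_ij = -a_ij for i != j.
L = np.diag(A.sum(axis=1)) - A

# Every row of a Laplacian sums to zero, so the all-ones vector is a
# right eigenvector with eigenvalue 0.
print(L @ np.ones(4))   # -> [0. 0. 0. 0.]
```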

2.2. System Model

Consider a MAS with second-order switching dynamics, consisting of $n$ followers and $m$ leaders, labeled as $V = \{1, 2, \dots, n+m\}$, where $n+m \in \mathbb{N}^+$. The set of followers is denoted as $V_f = \{v_1, v_2, \dots, v_n\}$, and the set of leaders is denoted as $V_l = \{v_{n+1}, v_{n+2}, \dots, v_{n+m}\}$. In a MAS considering second-order switching dynamics, all agents are subject to both CT and DT switching dynamics. The follower state of the CT dynamical system is described as
$$\dot{x}_i^c(t) = v_i^c(t), \quad \dot{v}_i^c(t) = u_i^c(t), \quad i = 1, 2, \dots, n, \tag{1}$$
the leader state of the CT dynamical system is expressed as
$$\dot{x}_i^c(t) = v_i^c(t), \quad \dot{v}_i^c(t) = 0, \quad i = n+1, n+2, \dots, n+m, \tag{2}$$
the follower state of the DT dynamical system is represented as
$$x_i^d(t_{k+1}) = x_i^d(t_k) + v_i^d(t_k), \quad v_i^d(t_{k+1}) = v_i^d(t_k) + u_i^d(t_k), \tag{3}$$
the leader state of the DT dynamical system is set to
$$x_i^d(t_{k+1}) = x_i^d(t_k) + v_i^d(t_k), \quad v_i^d(t_{k+1}) = v_i^d(t_0). \tag{4}$$
For agent $i$, its position $x_i(\cdot)$, velocity $v_i(\cdot)$, and control input $u_i(\cdot)$ are all real-valued, i.e., $x_i(\cdot), v_i(\cdot), u_i(\cdot) \in \mathbb{R}$. The superscripts $c$ and $d$ indicate that agent $i$ belongs to the CT and DT dynamical systems, respectively. $v_i(t_0)$, $i \in V_l$, represents the initial velocity of the leaders. Since the leader states are unaffected by the other agents, we assume in this paper that the velocity of each leader remains equal to its initial value at all times. We use a control protocol for CT systems in which each agent continuously observes its own state and the states of its neighbors. The control input is given by
$$u_i^c(t) = \beta \sum_{j \in N_i} a_{ij}\big(v_j(t) - v_i(t)\big) + \alpha \sum_{j \in N_i} a_{ij}\big(x_j(t) - x_i(t)\big), \tag{5}$$
for DT systems, a control protocol in which each agent observes its own state and its neighbors' states at the sampling instants is used; the corresponding control input is defined as
$$u_i^d(t_k) = \beta \sum_{j \in N_i} a_{ij}\big(v_j(t_k) - v_i(t_k)\big) + \alpha \sum_{j \in N_i} a_{ij}\big(x_j(t_k) - x_i(t_k)\big), \tag{6}$$
where $\alpha, \beta > 0$ are the coupling gains.
Thus, systems (1) and (3) can be written as
$$\dot{x}_i^c(t) = v_i^c(t), \quad \dot{v}_i^c(t) = \beta \sum_{j \in N_i} a_{ij}\big(v_j(t) - v_i(t)\big) + \alpha \sum_{j \in N_i} a_{ij}\big(x_j(t) - x_i(t)\big), \quad i = 1, 2, \dots, n, \tag{7}$$
and
$$x_i^d(t_{k+1}) = x_i^d(t_k) + v_i^d(t_k), \quad v_i^d(t_{k+1}) = v_i^d(t_k) + \beta \sum_{j \in N_i} a_{ij}\big(v_j(t_k) - v_i(t_k)\big) + \alpha \sum_{j \in N_i} a_{ij}\big(x_j(t_k) - x_i(t_k)\big). \tag{8}$$
For the systems (1) and (2), let $\zeta_i(t) = [x_i(t), v_i(t)]^T$. Then, the states of followers and leaders are respectively given as follows
$$\dot{\zeta}_i(t) = A \zeta_i(t) - \sum_{j=1}^{n+m} l_{ij} B \zeta_j(t), \quad i = 1, 2, \dots, n, \tag{9}$$
$$\dot{\zeta}_i(t) = A \zeta_i(t), \quad i = n+1, n+2, \dots, n+m, \tag{10}$$
where $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 0 \\ \alpha & \beta \end{bmatrix}$. Within the leader–follower framework, $L$ is structured as $L = \begin{bmatrix} L_f & L_{fl} \\ L_{fl}^T & L_l \end{bmatrix}$, where $L_f \in \mathbb{R}^{n \times n}$ and $L_l \in \mathbb{R}^{m \times m}$ are the submatrices associated with the followers and leaders, respectively, and $L_{fl} \in \mathbb{R}^{n \times m}$ and $L_{fl}^T \in \mathbb{R}^{m \times n}$ are the connection matrices between leaders and followers.
Let $\zeta(t) = [\zeta_1^T(t), \zeta_2^T(t), \dots, \zeta_{n+m}^T(t)]^T$; then (1) and (2) can be characterized as
$$\dot{\zeta}(t) = \left( \begin{bmatrix} I_n \otimes A & 0_{2n \times 2m} \\ 0_{2m \times 2n} & I_m \otimes A \end{bmatrix} - \begin{bmatrix} L_f \otimes B & L_{fl} \otimes B \\ 0_{2m \times 2n} & 0_{2m \times 2m} \end{bmatrix} \right) \zeta(t),$$
simplifying it yields
$$\dot{\zeta}(t) = \big( I_{n+m} \otimes A - L \otimes B \big) \zeta(t). \tag{11}$$
For the DT systems (3) and (4), by applying a similar transformation, one obtains
$$\zeta(t_{k+1}) = \big( I_{n+m} \otimes C - L \otimes D \big) \zeta(t_k), \tag{12}$$
where $\zeta(t_k) = [\zeta_1^T(t_k), \zeta_2^T(t_k), \dots, \zeta_{n+m}^T(t_k)]^T$, $\zeta_i(t_k) = [x_i(t_k), v_i(t_k)]^T$, $C = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$, and $D = \begin{bmatrix} 0 & 0 \\ \alpha & \beta \end{bmatrix}$.
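The stacked DT update (12) can be exercised numerically. The sketch below uses an illustrative 3-agent directed chain and illustrative gains $\alpha, \beta$ (not the paper's simulation values); agent 1 plays the leader role, so the follower positions should approach the leader's constant position.

```python
import numpy as np

# Minimal sketch of zeta(t_{k+1}) = (I (x) C - L (x) D) zeta(t_k).
n_agents = 3
A_adj = np.array([[0., 0., 0.],   # agent 1: no neighbors (acts as a leader)
                  [1., 0., 0.],   # agent 2 listens to agent 1
                  [0., 1., 0.]])  # agent 3 listens to agent 2
L = np.diag(A_adj.sum(axis=1)) - A_adj

alpha, beta = 0.1, 0.4            # illustrative gains chosen for stability
C = np.array([[1., 1.], [0., 1.]])
D = np.array([[0., 0.], [alpha, beta]])

# System matrix of the stacked dynamics (Kronecker structure as in (12)).
M = np.kron(np.eye(n_agents), C) - np.kron(L, D)

# Stacked state [x1, v1, x2, v2, x3, v3]; leader starts at x=1, v=0.
zeta = np.array([1.0, 0.0, -2.0, 0.5, 3.0, -0.5])
for _ in range(2000):
    zeta = M @ zeta

print(zeta[0::2])   # positions: followers approach the leader's position 1
print(zeta[1::2])   # velocities: followers approach the leader's velocity 0
```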
Lemma 1
([]). $M = (m_{ij}) \in \mathbb{R}^{n \times n}$ is regarded as strictly diagonally dominant (SDD) if $|m_{ii}| > \sum_{j=1, j \neq i}^{n} |m_{ij}|$, $\forall i \in V$. An SDD matrix is invertible.
Lemma 2
([]). A quadratic complex-coefficient polynomial is given as
$$g(s) = s^2 + \omega_1 s + \omega_2,$$
where $\omega_1$ and $\omega_2$ are complex numbers. If $g(s)$ satisfies $\operatorname{Re}(\omega_1) > 0$ and $\operatorname{Re}(\omega_1)\operatorname{Im}(\omega_1)\operatorname{Im}(\omega_2) + \operatorname{Re}^2(\omega_1)\operatorname{Re}(\omega_2) - \operatorname{Im}^2(\omega_2) > 0$, then it is called Hurwitz stable.
Lemma 3
(Gershgorin Circle Theorem []). Let $B = (b_{ij}) \in \mathbb{R}^{n \times n}$ and let $\lambda_1, \lambda_2, \dots, \lambda_n$ denote its eigenvalues. Then $\lambda_i \in \bigcup_{i=1}^{n} G_i$, $i \in \{1, 2, \dots, n\}$, where $G_i = \big\{ z : |z - b_{ii}| \le \sum_{j=1, j \neq i}^{n} |b_{ij}| \big\}$.
Lemma 4
([]). For a stochastic matrix $S = (s_{ij}) \in \mathbb{R}^{n \times n}$, the eigenvalue $1$ always exists, with corresponding eigenvector $\mathbf{1}_n$. The absolute values of all other eigenvalues lie within the unit circle.

2.3. Conceptualization and Representation in Game Theory

We consider a CT-DT switching dynamic system with $n$ follower players and $m$ leader players. The set of follower players is defined as $P_f = \{p_1, p_2, \dots, p_n\}$, and the set of leader players is defined as $P_l = \{p_{n+1}, p_{n+2}, \dots, p_{n+m}\}$. The state of player $p_i$, $i = 1, 2, \dots, n+m$, at the game moment $t = t_k$ is defined as $\zeta_i(t) \in \mathbb{R}$. The competitive behavior among agents in the switching system is cast as a multi-player game, denoted by $\mathcal{G}(P, S_i, C_i)$, which is expressed as follows.
Players: The set of all players is defined as P = P f P l .
Strategy: All players are assumed to make their strategy choices independently and simultaneously. Each player $p_i$, $i \in P$, selects a strategy from its own strategy set $S_i \subseteq \mathbb{R}$ and competes against its opponents with the objective of minimizing its own cost function $f_i(\cdot)$. Define $t^-$ as the instant immediately preceding the game (i.e., the time right before the strategy update at $t$). If a player changes its state $\zeta_i(t)$ by selecting a strategy, the player held the state $\zeta_i^-(t)$ just before time $t$.
Cost: The leader players $p_i$, $i \in P_l$, are assumed to be upper-level decision-makers, who compete exclusively with other leaders, not with followers. Their cost function is governed by the cost of changing one's own state, $[\zeta_i(t_{k+1}) - \zeta_i^-(t_{k+1})]^T [\zeta_i(t_{k+1}) - \zeta_i^-(t_{k+1})]$, and the cost of divergence from the other leader players, $\frac{1}{q_i} \sum_{j \in N_i} h_{ij} [\zeta_i(t_{k+1}) - \zeta_j(t_{k+1})]^T [\zeta_i(t_{k+1}) - \zeta_j(t_{k+1})]$. Therefore, the cost function of the leader player $p_i$, $i \in P_l$, is defined as
$$f_i(t_{k+1}) = \kappa_i \big[\zeta_i(t_{k+1}) - \zeta_i^-(t_{k+1})\big]^T \big[\zeta_i(t_{k+1}) - \zeta_i^-(t_{k+1})\big] + \vartheta_i \frac{1}{q_i} \sum_{j \in N_i} h_{ij} \big[\zeta_i(t_{k+1}) - \zeta_j(t_{k+1})\big]^T \big[\zeta_i(t_{k+1}) - \zeta_j(t_{k+1})\big], \tag{13}$$
where $\kappa_i$ and $\vartheta_i$ represent the cost weight of the change in its own state and the cost weight of divergence from other players, respectively, $q_i = \sum_{j \in N_i} h_{ij} > 0$, $0 < \kappa_i, \vartheta_i < 1$, and $\kappa_i + \vartheta_i = 1$. In addition, the follower players $p_i$, $i \in P_f$, do not engage in the competition among leader players but instead follow the directives of the leaders and adjust their own states subject to those instructions. Consequently, the cost function of the follower players $p_i$, $i \in P_f$, is defined as
$$f_i(t_{k+1}) = \big[\zeta_i(t_{k+1}) - \zeta_i^-(t_{k+1})\big]^T \big[\zeta_i(t_{k+1}) - \zeta_i^-(t_{k+1})\big]. \tag{14}$$
Definition 1.
In a multi-player game $\mathcal{G}$, a strategy profile $s^*$ is a Nash equilibrium solution if it satisfies
$$f_i(s_i^*, s_{-i}^*) \le f_i(s_i, s_{-i}^*), \quad \forall s_i \in S_i, \ i \in P,$$
where the cost $f_i(s_i, s_{-i})$ for player $p_i$ depends on its own strategy $s_i \in S_i$ and the strategy profile $s_{-i}$ of the other players. Thus, no player gains from unilaterally deviating from the equilibrium $s^*$.
Remark 1.
All players are rational and selfish. They seek to minimize their state changes and reduce discrepancies with their opponents, which is applicable in many practical situations. For example, in tariff negotiations, each country aims to uphold its tariffs while reaching a consensus with other countries, requiring a balance between its own interests and discrepancies with others. In social networks, individuals typically maintain their own viewpoints while seeking consensus, necessitating compromises to balance personal preferences with group alignment.

3. Main Results

Consider a CT-DT switching dynamic MAS on a directed graph $G$. Assume that each agent in the system satisfies $q_i = \sum_{j \in N_i} h_{ij} > 0$, and that the interactions among these agents are modeled as a game $\mathcal{G}(P, S_i, C_i)$. The cost functions of the leader players and the follower players correspond to Equations (13) and (14), respectively. Let the state set of followers be $\zeta_f(t_k) = [\zeta_1(t_k), \zeta_2(t_k), \dots, \zeta_n(t_k)]^T$ and the state set of leaders be $\zeta_l(t_k) = [\zeta_{n+1}(t_k), \zeta_{n+2}(t_k), \dots, \zeta_{n+m}(t_k)]^T$, and let $\zeta(t_k) = [\zeta_f(t_k), \zeta_l(t_k)]^T$.
Theorem 1.
The game $\mathcal{G}(P, S_i, C_i)$ has a unique NE $\big(\zeta_1^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_{n+m}^*(t_{k+1})\big)$ with $\zeta^*(t_{k+1}) = \tilde{\Xi}^{-1} \tilde{K} \zeta^-(t_{k+1})$ holding, where
$$\tilde{\Xi} = \begin{bmatrix} I_n & 0 \\ 0 & \Xi \end{bmatrix}, \quad \Xi = I_m - QH, \quad Q = \operatorname{diag}\!\left(\frac{\vartheta_1}{q_1}, \frac{\vartheta_2}{q_2}, \dots, \frac{\vartheta_m}{q_m}\right), \quad H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1m} \\ h_{21} & h_{22} & \cdots & h_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ h_{m1} & h_{m2} & \cdots & h_{mm} \end{bmatrix}, \quad \tilde{K} = \begin{bmatrix} I_n & 0 \\ 0 & K \end{bmatrix}, \quad K = \operatorname{diag}(\kappa_1, \kappa_2, \dots, \kappa_m).$$
Proof of Theorem 1.
For $p_i$, $i \in P_f$, $f_i(\zeta_i(t_{k+1}))$ has a global minimum $\zeta_i^*(t_{k+1})$, which satisfies
$$\frac{\partial f_i(\zeta_i(t_{k+1}))}{\partial \zeta_i(t_{k+1})} \bigg|_{\zeta_1^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_n^*(t_{k+1})} = 0, \qquad \frac{\partial^2 f_i(\zeta_i(t_{k+1}))}{\partial \zeta_i(t_{k+1})^2} \bigg|_{\zeta_1^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_n^*(t_{k+1})} > 0;$$
for $p_i$, $i \in P_l$, if $\zeta_j(t_{k+1})$, $j \in P_l$, $j \neq i$, is fixed, then $f_i(\zeta_i(t_{k+1}), \zeta_j(t_{k+1}))$ is obviously a quadratic function with respect to $\zeta_i(t_{k+1})$; thus $f_i(\zeta_i(t_{k+1}), \zeta_j(t_{k+1}))$ has a global minimum $\zeta_i^*(t_{k+1})$ satisfying
$$\frac{\partial f_i(\zeta_i(t_{k+1}), \zeta_j(t_{k+1}))}{\partial \zeta_i(t_{k+1})} \bigg|_{\zeta_{n+1}^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_{n+m}^*(t_{k+1})} = 0, \qquad \frac{\partial^2 f_i(\zeta_i(t_{k+1}), \zeta_j(t_{k+1}))}{\partial \zeta_i(t_{k+1})^2} \bigg|_{\zeta_{n+1}^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_{n+m}^*(t_{k+1})} > 0.$$
According to (15)–(17), the game $\mathcal{G}(P, S_i, C_i)$ has a unique NE solution. Furthermore, $\big(\zeta_1^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_{n+m}^*(t_{k+1})\big)$ is the NE solution if and only if
$$\frac{\partial f_i(\zeta_i(t_{k+1}))}{\partial \zeta_i(t_{k+1})} \bigg|_{\zeta_1^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_n^*(t_{k+1})} = 0, \ i \in P_f, \qquad \frac{\partial f_i(\zeta_i(t_{k+1}), \zeta_j(t_{k+1}))}{\partial \zeta_i(t_{k+1})} \bigg|_{\zeta_{n+1}^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_{n+m}^*(t_{k+1})} = 0, \ i \in P_l. \tag{18}$$
By solving (18), one obtains
$$\zeta_i^*(t_{k+1}) = \zeta_i^-(t_{k+1}), \ i \in P_f, \qquad \zeta_i^*(t_{k+1}) - \frac{\vartheta_i}{q_i} \sum_{j \in N_i} h_{ij} \zeta_j^*(t_{k+1}) = \kappa_i \zeta_i^-(t_{k+1}), \ i \in P_l. \tag{19}$$
Let $H = [h_{ij}] \in \mathbb{R}^{m \times m}$ with $h_{11} = h_{22} = \dots = h_{mm} = 0$. Equation (19) can be written in matrix form as
$$\begin{bmatrix} I_n & 0 \\ 0 & \Xi \end{bmatrix} \begin{bmatrix} \zeta_f^*(t_{k+1}) \\ \zeta_l^*(t_{k+1}) \end{bmatrix} = \begin{bmatrix} I_n & 0 \\ 0 & K \end{bmatrix} \begin{bmatrix} \zeta_f^-(t_{k+1}) \\ \zeta_l^-(t_{k+1}) \end{bmatrix}, \tag{20}$$
where $\Xi = I_m - QH$, $Q = \operatorname{diag}(\vartheta_1/q_1, \vartheta_2/q_2, \dots, \vartheta_m/q_m)$, and $K = \operatorname{diag}(\kappa_1, \kappa_2, \dots, \kappa_m)$. For convenience, let $\tilde{\Xi} = \operatorname{diag}(I_n, \Xi) \in \mathbb{R}^{(n+m) \times (n+m)}$ and $\tilde{K} = \operatorname{diag}(I_n, K) \in \mathbb{R}^{(n+m) \times (n+m)}$; then (20) can be rewritten as $\tilde{\Xi} \zeta^*(t_{k+1}) = \tilde{K} \zeta^-(t_{k+1})$. Observing $\tilde{\Xi} = [e_{ij}] \in \mathbb{R}^{(n+m) \times (n+m)}$, it can be found that
$$\sum_{j=1, j \neq i}^{n+m} |e_{ij}| < 1 = e_{ii}, \quad \forall i \in P,$$
which implies that $\tilde{\Xi}$ is strictly diagonally dominant. By Lemma 1, $\tilde{\Xi}$ is invertible. Therefore, we have
$$\zeta^*(t_{k+1}) = \tilde{\Xi}^{-1} \tilde{K} \zeta^-(t_{k+1}). \tag{21}$$
As a consequence, the game $\mathcal{G}(P, S_i, C_i)$ has the unique Nash equilibrium $\big(\zeta_1^*(t_{k+1}), \dots, \zeta_i^*(t_{k+1}), \dots, \zeta_{n+m}^*(t_{k+1})\big)$ with $\zeta^*(t_{k+1}) = \tilde{\Xi}^{-1} \tilde{K} \zeta^-(t_{k+1})$ holding. □
This theorem establishes the existence of a unique Nash Equilibrium (NE) in the game involving multiple leaders and followers. It signifies that each leader can find an optimal balance between minimizing its own state deviation and reducing disagreements with other leaders. The followers, in contrast, do not engage in competition and simply adjust their states according to the leaders’ directives. The existence of this unique NE not only provides a stable optimization objective for the system but also forms a solid foundation for designing the subsequent consensus control protocols.
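The leaders' equilibrium map can be checked numerically. The sketch below builds $\Xi = I_m - QH$ and $K$ for $m = 3$ leaders with illustrative weights $h_{ij}$, $\vartheta_i$, $\kappa_i = 1 - \vartheta_i$ (not values from this paper), and verifies the row-stochastic property $\Xi^{-1}K\mathbf{1}_m = \mathbf{1}_m$ used later in Eq. (25).

```python
import numpy as np

m = 3
H = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])          # h_ii = 0, h_ij >= 0 (illustrative)
q = H.sum(axis=1)                     # q_i = sum_j h_ij > 0
theta = np.array([0.3, 0.5, 0.4])     # divergence weights, 0 < theta_i < 1
kappa = 1.0 - theta                   # kappa_i + theta_i = 1

Q = np.diag(theta / q)
Xi = np.eye(m) - Q @ H                # Xi = I_m - QH, strictly diag. dominant
K = np.diag(kappa)

W = np.linalg.inv(Xi) @ K             # game control matrix Xi^{-1} K

# Row sums are 1 and all entries are nonnegative, so W is row-stochastic:
# the NE map is a weighted average of the leaders' pre-game states.
print(W.sum(axis=1))                  # -> [1. 1. 1.]
print(bool((W >= 0).all()))           # -> True
```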
All agents participate in the game $\mathcal{G}(P, S_i, C_i)$ and adjust their states at the next time step based on the NE. Under the framework of the game, embedding the CT system (11) into (21) yields
$$\dot{\zeta}(t) = \big(\tilde{\Xi}^{-1} \tilde{K} \otimes I_2\big)\big( I_{n+m} \otimes A - L \otimes B \big) \zeta(t) = \begin{bmatrix} I_n \otimes A - L_f \otimes B & -L_{fl} \otimes B \\ 0 & \Xi^{-1} K \otimes A \end{bmatrix} \zeta(t) = \big( \Psi \otimes A - L \otimes B \big) \zeta(t), \tag{22}$$
where $\Psi = \operatorname{diag}(I_n, \Xi^{-1} K)$. Similarly, embedding the DT system (12) into (21) yields
$$\zeta(t_{k+1}) = \big( \Psi \otimes C - L \otimes D \big) \zeta(t_k). \tag{23}$$
By analyzing the matrix $\Xi^{-1} K = [c_{ij}] \in \mathbb{R}^{m \times m}$, we can conclude that
$$\Xi \mathbf{1}_m = \begin{bmatrix} 1 & -\frac{\vartheta_1}{q_1} h_{12} & \cdots & -\frac{\vartheta_1}{q_1} h_{1m} \\ -\frac{\vartheta_2}{q_2} h_{21} & 1 & \cdots & -\frac{\vartheta_2}{q_2} h_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ -\frac{\vartheta_m}{q_m} h_{m1} & -\frac{\vartheta_m}{q_m} h_{m2} & \cdots & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} = \begin{bmatrix} 1 - \frac{\vartheta_1}{q_1} \sum_{j \in N_{n+1}} h_{(n+1)j} \\ 1 - \frac{\vartheta_2}{q_2} \sum_{j \in N_{n+2}} h_{(n+2)j} \\ \vdots \\ 1 - \frac{\vartheta_m}{q_m} \sum_{j \in N_{n+m}} h_{(n+m)j} \end{bmatrix} = \begin{bmatrix} 1 - \vartheta_1 \\ 1 - \vartheta_2 \\ \vdots \\ 1 - \vartheta_m \end{bmatrix}, \tag{24}$$
and $K \mathbf{1}_m = \operatorname{diag}(\kappa_1, \kappa_2, \dots, \kappa_m) \mathbf{1}_m = [\kappa_1, \kappa_2, \dots, \kappa_m]^T$. Since the weight factors satisfy $\vartheta_i + \kappa_i = 1$, we have $K \mathbf{1}_m = [1 - \vartheta_1, 1 - \vartheta_2, \dots, 1 - \vartheta_m]^T$. Thus, we obtain
$$\Xi^{-1} K \mathbf{1}_m = \mathbf{1}_m. \tag{25}$$
According to the foregoing, $\Xi = I_m - QH$; assume that the eigenvalues of $QH = [b_{ij}] \in \mathbb{R}^{m \times m}$ are $\lambda_i$, $i = n+1, n+2, \dots, n+m$. Based on Lemma 3, we obtain $\lambda_i \in \bigcup_{j=n+1}^{n+m} G_j$, where
$$G_i = \left\{ c : |c - b_{ii}| \le \sum_{j=n+1, j \neq i}^{n+m} |b_{ij}| \right\} = \left\{ c : |c| \le \sum_{j=n+1, j \neq i}^{n+m} \left| \frac{\vartheta_i}{q_i} h_{ij} \right| \right\} = \big\{ c : |c| \le \vartheta_i < 1 \big\}.$$
It means $|\lambda_i| < 1$. Thus
$$\Xi^{-1} = (I_m - QH)^{-1} = \sum_{n=0}^{\infty} (QH)^n = I_m + QH + (QH)^2 + \cdots. \tag{26}$$
Clearly, each element of the matrix $QH$ is nonnegative. Accordingly, the inverse matrix $\Xi^{-1}$ is nonnegative and possesses positive diagonal entries. According to (25) and (26), $\Xi^{-1} K$ is a stochastic matrix with positive diagonal elements. By Lemma 4, the eigenvalue $1$ corresponds to the eigenvector $\mathbf{1}_m$, while the other eigenvalues satisfy $|\lambda_i| < 1$.
The Jordan canonical form of $L$ associated with the directed graph $G$ is defined as $J = \operatorname{diag}(J_1, J_2, \dots, J_r)$, where the Jordan block $J_d$ is given by
$$J_d = \begin{bmatrix} \lambda_d & 1 & & 0 \\ & \lambda_d & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_d \end{bmatrix} \in \mathbb{R}^{N_d \times N_d},$$
where $\lambda_d$ is an eigenvalue of $L$ with algebraic multiplicity $N_d$, $d = 1, 2, \dots, r$. Moreover, these multiplicities satisfy $N_1 + N_2 + \dots + N_r = n + m$.
Let $J$ denote the Jordan canonical form of $L$, such that $L = P J P^{-1}$. The columns of the nonsingular matrix $P$ comprise the right eigenvectors of $L$, and the rows of $P^{-1}$ comprise the left eigenvectors. Letting $\xi(t) = (P^{-1} \otimes I_2) \zeta(t)$, the CT game control system (22) is cast to
$$\dot{\xi}(t) = (P^{-1} \otimes I_2)\big(\Psi \otimes A - L \otimes B\big) \zeta(t) = \big(\Psi \otimes A - J \otimes B\big) \xi(t). \tag{27}$$
For the system (23), through a similar transformation, we obtain the DT game control system as
$$\xi(t_{k+1}) = \big(\Psi \otimes C - J \otimes D\big) \xi(t_k). \tag{28}$$
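The similarity transform underlying (27)–(28) can be verified numerically: when the Laplacian has distinct eigenvalues, the Jordan form is diagonal and $J = P^{-1} L P$ with the eigenvector matrix $P$. The 3-node directed chain below is an illustrative graph, not one from this paper.

```python
import numpy as np

# Illustrative Laplacian of a 3-node directed chain with distinct
# eigenvalues (0, 1, 2), hence a diagonal Jordan form.
A_adj = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 2., 0.]])
L = np.diag(A_adj.sum(axis=1)) - A_adj

eigvals, P = np.linalg.eig(L)        # columns of P: right eigenvectors of L
J = np.linalg.inv(P) @ L @ P         # J = P^{-1} L P

# Off-diagonal entries vanish (up to round-off): J equals diag(eigvals),
# so the transformed dynamics decouple along the Jordan blocks.
print(bool(np.allclose(J, np.diag(eigvals))))   # -> True
```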
Lemma 5
([]). Assume that the directed graph $G$ contains a directed spanning tree. The CT-DT switching dynamic MAS can achieve consensus if $\lim_{t \to \infty} \xi_i(t) = 0$ and $\lim_{k \to \infty} \xi_i(t_k) = 0$, where $i = 2, 3, \dots, n+m$.
We have made the following assumptions.
A1
The graph G admits a directed spanning forest.
A2
The control gain simultaneously satisfies conditions (29) and (30).
$$\frac{\beta^2}{\alpha} > \max_{u_i \in \sigma(L)} \frac{\operatorname{Im}^2(u_i)}{\operatorname{Re}(u_i)\,|u_i|^2}, \qquad \frac{\kappa_i^2}{\vartheta_i} > \max_{\varepsilon_i \in \sigma(\Xi^{-1} K)} \frac{\operatorname{Re}(\varepsilon_i)}{\operatorname{Im}^2(\varepsilon_i)}, \tag{29}$$
$$4 \operatorname{Re}(u_i) > |u_i|^2 (2\beta - \alpha), \qquad (\alpha - \beta)^2 |u_i|^2 \big[ (\alpha - 2\beta) |u_i|^2 + 4 \operatorname{Re}(u_i) \big] > 4 \alpha \operatorname{Im}^2(u_i), \tag{30}$$
where $u_i$ is an eigenvalue of $L$ and $\varepsilon_i$ is an eigenvalue of the game control matrix $\Xi^{-1} K$.
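The first inequality of (29) can be checked mechanically once the spectrum of $L$ is available. The helper below is a minimal sketch; the eigenvalue list and gains are hypothetical, and only the nonzero eigenvalues are tested, since $u = 0$ corresponds to the consensus subspace.

```python
import numpy as np

def ct_gain_condition(alpha, beta, laplacian_eigs, tol=1e-9):
    """Check beta^2/alpha > max_i Im^2(u_i) / (Re(u_i) |u_i|^2)
    over the nonzero Laplacian eigenvalues u_i (first condition of (29))."""
    us = [u for u in laplacian_eigs if abs(u) > tol]   # skip u = 0
    bound = max(u.imag**2 / (u.real * abs(u)**2) for u in us)
    return beta**2 / alpha > bound

# Hypothetical spectrum of L with one complex-conjugate pair.
eigs = [0.0, 1.0, 1.5 + 0.5j, 1.5 - 0.5j]
print(ct_gain_condition(1.0, 1.0, eigs))   # -> True  (large enough beta)
print(ct_gain_condition(1.0, 0.1, eigs))   # -> False (beta too small)
```

For purely real spectra the bound is zero, so any $\alpha, \beta > 0$ satisfy this part of (29); the condition only bites when the directed graph produces complex Laplacian eigenvalues.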
Theorem 2.
Consensus is achieved in the CT-DT switching MAS (27)–(28) if and only if A1 and A2 are satisfied.
Proof of Theorem 2.
(Sufficiency) Let $\Gamma = \Psi \otimes A - J \otimes B$ and $\Pi = \Psi \otimes C - J \otimes D$; then (27) and (28) can be expressed as $\dot{\xi}(t) = \Gamma \xi(t)$, $\xi(t_{k+1}) = \Pi \xi(t_k)$. Note that $\Gamma \Pi = \Pi \Gamma$; for any initial state $\xi(t_0)$, we have
$$\xi(t) = e^{\Gamma t} \Pi^k \xi(t_0). \tag{31}$$
Since $J = P^{-1} L P$, where $P$ is a nonsingular matrix, the characteristic polynomial of $\Gamma$ is
$$\det\big(\lambda I_{2(n+m)} - \Gamma\big) = \det\big(\lambda I_{2n} - (I_n \otimes A - J_f \otimes B)\big) \det\big(\lambda I_{2m} - \Xi^{-1} K \otimes A\big) = \prod_{i=1}^{n} \big( \lambda^2 + u_i \beta \lambda + u_i \alpha \big) \prod_{j=n+1}^{n+m} \big( \lambda^2 + \varepsilon_j \big) = \prod_{i=1}^{n} f_i^c(\lambda) \prod_{j=n+1}^{n+m} g_j^c(\lambda),$$
where $f_i^c(\lambda) = \lambda^2 + u_i \beta \lambda + u_i \alpha$ and $g_j^c(\lambda) = \lambda^2 + \varepsilon_j$. Substituting $\lambda = i\varpi$ into $f_i^c(\lambda)$ and $g_j^c(\lambda)$, we obtain
$$f_i^c(i\varpi) = -\varpi^2 + \beta \big( \operatorname{Re}(u_i) + i \operatorname{Im}(u_i) \big) i\varpi + \alpha \big( \operatorname{Re}(u_i) + i \operatorname{Im}(u_i) \big)$$
and
$$g_j^c(i\varpi) = -\varpi^2 + \operatorname{Re}(\varepsilon_j) + i \operatorname{Im}(\varepsilon_j).$$
The determinant of the block matrix is computed as follows. Since the matrix is block upper triangular, we have
$$\det(\lambda I - \Gamma) = \det\big( \lambda I_{2n} - (I_n \otimes A - J_f \otimes B) \big) \cdot \det\big( \lambda I_{2m} - \Xi^{-1} K \otimes A \big).$$
For the first term, corresponding to the follower subsystem, we note that $J_f$ is block diagonal (the Jordan form of $L_f$), so
$$\det\big( \lambda I_{2n} - (I_n \otimes A - J_f \otimes B) \big) = \prod_{i=1}^{n} \det\left( \lambda I_2 - \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + u_i \begin{bmatrix} 0 & 0 \\ \alpha & \beta \end{bmatrix} \right),$$
which simplifies to
$$\prod_{i=1}^{n} \det \begin{bmatrix} \lambda & -1 \\ u_i \alpha & \lambda + u_i \beta \end{bmatrix} = \prod_{i=1}^{n} \big( \lambda^2 + u_i \beta \lambda + u_i \alpha \big).$$
For the second term, corresponding to the leader subsystem under game-based control, we have
$$\det\big( \lambda I_{2m} - \Xi^{-1} K \otimes A \big) = \prod_{j=n+1}^{n+m} \det\left( \lambda I_2 - \varepsilon_j \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = \prod_{j=n+1}^{n+m} \det \begin{bmatrix} \lambda & -\varepsilon_j \\ 0 & \lambda \end{bmatrix},$$
which gives
$$\prod_{j=n+1}^{n+m} \lambda^2;$$
therefore, the full characteristic polynomial is
$$\det(\lambda I - \Gamma) = \prod_{i=1}^{n} \big( \lambda^2 + u_i \beta \lambda + u_i \alpha \big) \cdot \prod_{j=n+1}^{n+m} \lambda^2.$$
This confirms that the eigenvalues of $\Gamma$ are the roots of $f_i^c(\lambda) = \lambda^2 + u_i \beta \lambda + u_i \alpha$ for $i = 1, \dots, n$, and $\lambda = 0$ (with multiplicity $2m$) for the leaders. The stability of the system depends on the Hurwitz stability of $f_i^c(\lambda)$, which is analyzed in the subsequent steps.
The corresponding real and imaginary parts are $m(\varpi) = -\varpi^2 + \alpha \operatorname{Re}(u_i) - \beta \operatorname{Im}(u_i) \varpi$ and $n(\varpi) = \alpha \operatorname{Im}(u_i) + \beta \operatorname{Re}(u_i) \varpi$, respectively. $f_i^c(\lambda)$ is Hurwitz stable if and only if the following requirements are met:
  • The polynomial $m(\varpi)$ possesses two distinct real roots, denoted by $m_2 > m_1$;
  • The crossing condition $m_2 > n_1 > m_1$ is satisfied, where $n_1$ is the unique root of $n(\varpi)$;
  • $n'(0) m(0) - n(0) m'(0) > 0$.
The two roots of $m(\varpi) = 0$ are $m_1 = \frac{-\beta \operatorname{Im}(u_i) - \sqrt{\beta^2 \operatorname{Im}^2(u_i) + 4\alpha \operatorname{Re}(u_i)}}{2}$ and $m_2 = \frac{-\beta \operatorname{Im}(u_i) + \sqrt{\beta^2 \operatorname{Im}^2(u_i) + 4\alpha \operatorname{Re}(u_i)}}{2}$, which satisfy condition (1). The root of $n(\varpi) = 0$ is $n_1 = -\frac{\alpha \operatorname{Im}(u_i)}{\beta \operatorname{Re}(u_i)}$; according to condition (2), $\frac{\beta^2}{\alpha} > \frac{\operatorname{Im}^2(u_i)}{\operatorname{Re}(u_i) |u_i|^2}$ is obtained. It can be seen from the expressions of $m(\varpi)$ and $n(\varpi)$ that $m(0) = \alpha \operatorname{Re}(u_i)$, $m'(0) = -\beta \operatorname{Im}(u_i)$, $n(0) = \alpha \operatorname{Im}(u_i)$, and $n'(0) = \beta \operatorname{Re}(u_i)$. According to condition (3), we get $\alpha \beta |u_i|^2 > 0$. For $g_j^c(i\varpi)$, following the steps above, $\frac{\kappa_i^2}{\vartheta_i} > \frac{\operatorname{Re}(\varepsilon_i)}{\operatorname{Im}^2(\varepsilon_i)}$ is obtained. After organizing, if condition (29) in A2 holds, the CT game control system is Hurwitz stable, indicating that all the eigenvalues of $\Gamma$ have negative real parts.
Similarly, in the DT game control setting, the characteristic polynomial of $\Pi$ is
$$\det\big(\lambda I_{2(n+m)} - \Pi\big) = \prod_{i=1}^{n} \det \begin{bmatrix} \lambda - 1 & -1 \\ u_i \alpha & \lambda - 1 + u_i \beta \end{bmatrix} \prod_{j=n+1}^{n+m} \det \begin{bmatrix} \lambda - \varepsilon_j & -\varepsilon_j \\ 0 & \lambda - \varepsilon_j \end{bmatrix} = \prod_{i=1}^{n} \big( \lambda^2 + (u_i \beta - 2) \lambda + 1 + u_i \alpha - u_i \beta \big) \prod_{j=n+1}^{n+m} \big( \lambda^2 - 2 \varepsilon_j \lambda + \varepsilon_j^2 \big) = \prod_{i=1}^{n} f_i^d(\lambda) \prod_{j=n+1}^{n+m} g_j^d(\lambda),$$
where $f_i^d(\lambda) = \lambda^2 + (u_i \beta - 2) \lambda + 1 + u_i \alpha - u_i \beta$ and $g_j^d(\lambda) = \lambda^2 - 2 \varepsilon_j \lambda + \varepsilon_j^2$. However, under DT dynamics, the appropriate stability criterion is Schur stability rather than Hurwitz stability. To this end, the bilinear transformation $\lambda = \frac{s+1}{s-1}$ is applied. Then we get
$$\bar{f}_i^d(s) = (s-1)^2 f_i^d\!\left(\frac{s+1}{s-1}\right) = u_i \alpha s^2 + 2(u_i \beta - u_i \alpha) s + u_i \alpha - 2 u_i \beta + 4,$$
and
$$\bar{g}_j^d(s) = (s-1)^2 g_j^d\!\left(\frac{s+1}{s-1}\right) = \big(1 - 2\varepsilon_j + \varepsilon_j^2\big) s^2 + 2\big(1 - \varepsilon_j^2\big) s + 1 + 2\varepsilon_j + \varepsilon_j^2.$$
Considering $u_i \alpha \neq 0$ and $1 - 2\varepsilon_j + \varepsilon_j^2 \neq 0$, we have
$$\tilde{f}_i^d(s) = s^2 + 2\left(\frac{\beta}{\alpha} - 1\right) s + 1 - \frac{2\beta}{\alpha} + \frac{4}{u_i \alpha},$$
and
$$\tilde{g}_j^d(s) = s^2 + \frac{2(1 - \varepsilon_j^2)}{1 - 2\varepsilon_j + \varepsilon_j^2} s + \frac{1 + 2\varepsilon_j + \varepsilon_j^2}{1 - 2\varepsilon_j + \varepsilon_j^2}.$$
Hence, Hurwitz stability of $\tilde{f}_i^d(s)$ and $\tilde{g}_j^d(s)$ implies Schur stability of $f_i^d(\lambda)$ and $g_j^d(\lambda)$. The proof of Hurwitz stability is similar to the preceding argument. According to conditions (1)–(3), we obtain $2\beta - \alpha < \frac{4 \operatorname{Re}(u_i)}{|u_i|^2}$ and $(\alpha - \beta)^2 |u_i|^2 \big[ (\alpha - 2\beta) |u_i|^2 + 4 \operatorname{Re}(u_i) \big] > 4 \alpha \operatorname{Im}^2(u_i)$. Correspondingly, $\tilde{g}_j^d(s)$ is Hurwitz stable if
$$4 \operatorname{Im}^2(\varepsilon_j) > 4 |\varepsilon_j|^2 (1 - \gamma), \qquad \gamma \big( 1 + 2 \operatorname{Re}(\varepsilon_j) + |\varepsilon_j|^2 \big) \operatorname{Im}^2(\varepsilon_j) + 2 \operatorname{Im}^2(\varepsilon_j) > 0, \qquad \big( 1 + 2 \operatorname{Re}(\varepsilon_j) \big) 4 \operatorname{Im}(\varepsilon_j) + \gamma \big( 1 - 4 |\varepsilon_j|^2 \big) > \big( 1 - 4 |\varepsilon_j|^2 \big) \big( 4 \operatorname{Re}(\varepsilon_j) + 7 |\varepsilon_j|^2 \big) \operatorname{Im}(\varepsilon_j),$$
where $\gamma = 2 |\varepsilon_j|^2 \operatorname{Re}(\varepsilon_j) + 4 \operatorname{Re}(\varepsilon_j) + 5 |\varepsilon_j|^2 + 1$. After calculation, if condition (30) in A2 is satisfied, the DT game control system is Schur stable, which means all eigenvalues of $\Pi$ lie within the unit circle.
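The bilinear-transform step can be sanity-checked numerically: $f(\lambda)$ is Schur stable (roots inside the unit circle) exactly when $(s-1)^2 f\big(\tfrac{s+1}{s-1}\big)$ is Hurwitz stable (roots in the open left half-plane). The coefficients below are an arbitrary illustration, not values derived from this paper's systems.

```python
import numpy as np

def bilinear(poly):
    """Map a quadratic a*l^2 + b*l + c through l = (s+1)/(s-1):
    (s-1)^2 * f((s+1)/(s-1)) = a(s+1)^2 + b(s+1)(s-1) + c(s-1)^2."""
    a, b, c = poly
    return [a + b + c, 2*a - 2*c, a - b + c]

f = [1.0, -1.6, 0.7]                 # illustrative quadratic, |roots| < 1
f_tilde = bilinear(f)

schur = all(abs(r) < 1 for r in np.roots(f))
hurwitz = all(r.real < 0 for r in np.roots(f_tilde))
print(schur, hurwitz)                # -> True True
```

Mapping the stability test into the $s$-domain is what lets the Hermite–Biehler-style conditions (1)–(3), stated for Hurwitz polynomials, be reused for the DT factors.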
Therefore, when A1 and A2 in Theorem 2 are satisfied, system (31) meets the consensus criterion of Lemma 5, which means the CT-DT switching dynamic MAS can achieve consensus.
(Necessity) Suppose the directed communication graph $G$ does not contain a directed spanning forest, meaning at least one follower cannot receive information from any leader. This implies that the state of that follower is not influenced by the leaders, and thus the system cannot achieve consensus. If the CT-DT switching dynamic MAS achieves consensus under arbitrary switching, the criterion of Lemma 5 must be satisfied, so that $f_i^c$, $g_j^c$ are necessarily Hurwitz stable and $f_i^d$, $g_j^d$ are Schur stable. This establishes the necessity. □
Remark 2.
The system operates in two distinct modes—CT game control and DT game control—between which it switches arbitrarily. Only one mode is active at any time.
This theorem provides the necessary and sufficient conditions for achieving consensus in the Continuous–Discrete switching multi-agent system. Condition A1 requires the communication graph to contain a directed spanning forest, which ensures that information can flow from the leaders to all followers. Condition A2 imposes constraints on the control gains α and β , guaranteeing system stability under arbitrary switching dynamics. Collectively, these conditions ensure that all agents’ states converge to a common value, even as the system switches arbitrarily between continuous-time and discrete-time dynamics.

4. Control Input with Sampled Data

To provide a clearer and more intuitive overview of the proposed control system architecture, a block diagram illustrating the interaction among the game-based decision-making module, the unified sampled-data controller, and the switching multi-agent system is presented in Figure 1.
Figure 1. Game-Based Switching Control System for Multi-Agent Systems.
We use the same control protocol for systems (1) and (3), the sampled data protocol is designed as
$$u_i(\cdot) = \alpha \sum_{j \in N_i} a_{ij} \big( x_j(t_k) - x_i(t_k) \big) + \beta \sum_{j \in N_i} a_{ij} \big( v_j(t_k) - v_i(t_k) \big). \tag{34}$$
Under this protocol, the system (1) is updated to
φ ˙ t = I n + m A φ t L B φ t k ,
where φ t = φ 1 T t , φ 2 T t , . . . , φ n + m T t T , φ i t = x i t , v i t T . Under the influence of the game framework, the CT system is updated to the CT game control system
φ ˙ t = A Ψ φ t B L φ t k .
Let ϕ t = P 1 I 2 φ t , then (36) could be written as
ϕ ˙ ( t ) = P 1 I 2 φ ˙ t = Ψ A ϕ t J B ϕ t k .
For DT game control systems, by similar transformations, we have
$$\phi(t_{k+1}) = (\Psi \otimes C)\phi(t_k) - (J \otimes B)\phi(t_k),$$
where $C = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$.
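The per-eigenvalue decoupling used throughout the analysis relies on the block structure of Kronecker products with diagonal $\Psi$ and $J$. A minimal numerical sketch of this property, with hypothetical diagonal $\Psi$, $J$, and gain matrix $B$ (all entries illustrative, not values from the paper):

```python
import numpy as np

# Hypothetical game weights (Psi) and eigenvalue matrix of the Laplacian (J),
# both taken diagonal for illustration; C as in the DT subsystem, and B is
# assumed to encode the gains (alpha, beta) in its second row.
C = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0, 0.0], [0.5, 0.6]])
Psi = np.diag([1.0, 0.8, 0.8])
J = np.diag([1.0, 1.2451, 2.8774])

# Closed-loop DT game map: phi(t_{k+1}) = (Psi (x) C - J (x) B) phi(t_k)
D = np.kron(Psi, C) - np.kron(J, B)

# Because Psi and J are diagonal, D is block-diagonal with 2x2 blocks
# D_i = psi_i * C - u_i * B, which is exactly what the stability analysis uses.
D2 = Psi[1, 1] * C - J[1, 1] * B
print(np.allclose(D[2:4, 2:4], D2))  # True
```

This decoupling is why the consensus conditions can be stated per eigenvalue $u_i$ rather than for the full network matrix.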
We make the following assumptions.
B1
G contains a directed spanning forest.
B2
The control gains of the system simultaneously satisfy (39) and (40).
$$\frac{2(1-c_{ij})}{(\beta+\alpha)\,\alpha\, c_{ij}} > h, \qquad \left(1-c_{ij}^2 - \frac{c_{ij}\operatorname{Re}(u_i)}{|u_i|^2}h\right)\left(\beta+\frac{\alpha}{2}h\right)^2 - (c_{ij}+1)^3\left(\beta+\frac{\alpha}{2}h\right)\frac{\operatorname{Im}(u_i)}{|u_i|^2}h > 0,$$
$$4\operatorname{Re}(u_i) > |u_i|^2(2\beta-\alpha), \qquad (\alpha+\beta)|u_i|^2\alpha - 2\alpha\beta - 2\beta^2|u_i|^2 + 4\operatorname{Re}(u_i) > \frac{4\operatorname{Im}^2(u_i)}{\alpha}.$$
Theorem 3.
The CT-DT switching MAS (37)–(38) achieves consensus if and only if conditions B1 and B2 are satisfied.
Remark 3.
This protocol implements a unified sampled-data control framework applicable to both CT and DT systems. During system switching, the controller avoids frequent switching between two control strategies; only the dynamic behavior of the system needs to be adjusted.
This theorem introduces a unified sampled-data control protocol applicable to both CT and DT subsystems. Its primary advantage lies in avoiding frequent controller switching during system mode changes, thereby simplifying implementation. Conditions B1 and B2 pertain to the graph structure and the constraints on control gains and the sampling period h, respectively. These conditions ensure the system converges to consensus even with sampled information. This control strategy is particularly suitable for practical hybrid systems that require a unified control framework.
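To illustrate how stability under sampled data can be checked numerically, the following sketch computes the spectral radius of the per-eigenvalue closed-loop map of a plain sampled-data double integrator. This is a simplified stand-in for the paper's model: the game weighting $\Psi$ is omitted, and $A = \begin{bmatrix}0&1\\0&0\end{bmatrix}$, $B = \begin{bmatrix}0&0\\\alpha&\beta\end{bmatrix}$ are assumed; the gains are the illustrative values of Example 2.

```python
import numpy as np

def sampled_spectral_radius(mu, alpha, beta, h):
    """Spectral radius of the per-eigenvalue sampled-data closed-loop map
    exp(A h) - mu * (integral_0^h exp(A s) ds) B for a double integrator,
    where mu is a (possibly complex) Laplacian eigenvalue."""
    eAh = np.array([[1.0, h], [0.0, 1.0]])            # exp(A h) for A = [[0,1],[0,0]]
    G = np.array([[alpha * h**2 / 2, beta * h**2 / 2],
                  [alpha * h,        beta * h       ]])  # (int_0^h e^{As} ds) B
    M = eAh - mu * G
    return max(abs(np.linalg.eigvals(M)))

# Illustrative check with the gains of Example 2 on the Laplacian eigenvalue mu = 1:
rho = sampled_spectral_radius(1.0, alpha=0.33, beta=0.5, h=0.8)
print(rho < 1.0)  # True: a spectral radius below 1 means this sampled mode is stable
```

Sweeping `h` upward in such a check shows the spectral radius crossing 1, which is the numerical counterpart of the maximum allowable sampling period appearing in condition B2.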

5. Simulations

Consider a directed graph consisting of 7 agents, where R1, R2, and R3 are leaders and F1, F2, F3, and F4 are followers, as shown in Figure 2. Clearly, it contains a directed spanning forest.
Figure 2. Directed graph of CT-DT switching MAS.
The Laplacian matrix $L$ is constructed from the directed graph topology shown in Figure 2. For a directed graph $G = (V, E, A)$, the Laplacian matrix $L = [l_{ij}]$ is defined by
$$l_{ij} = \begin{cases} \sum_{k \in N_i} a_{ik}, & \text{if } i = j, \\ -a_{ij}, & \text{if } i \neq j, \end{cases}$$
where $a_{ij} > 0$ if there exists a directed edge from agent $j$ to agent $i$, and $a_{ij} = 0$ otherwise. Based on the communication topology in Figure 2, the adjacency matrix $A$ is first determined, and the Laplacian matrix $L$ is then computed as follows:
$$L = \begin{bmatrix} 1 & 0 & 0 & 0 & -1 & 0 & 0 \\ -1 & 3 & 0 & -1 & 0 & -1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 0 & -1 \\ -1 & 0 & -1 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix},$$
whose eigenvalues are $u_1 = u_2 = u_3 = 0$, $u_4 = 1$, $u_5 = 1.2451$, and $u_{6,7} = 2.8774 \pm 0.7449i$. The matrix is partitioned according to the leader–follower structure: the first four rows and columns correspond to followers F1–F4, and the last three rows and columns correspond to leaders R1–R3. The eigenvalues of $\Xi^{-1}K$ are then calculated as $\epsilon_1 = 1$, $\epsilon_2 = 0.6308$, $\epsilon_3 = 0.1692$. The initial state of each agent is specified as $x(t_0) = [24, 18, 30, 0, 24, 6, 12]^T$ and $v(t_0) = [10, 0, 12, 22, 8, 20, 16]^T$. The initial position and velocity vectors are intentionally dispersed in both magnitude and direction, so that the system starts from a challenging configuration and the convergence capability of the control protocols is clearly demonstrated. The following examples verify the correctness of Theorems 2 and 3.
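The stated spectrum can be cross-checked numerically; the matrix below is our reconstruction of $L$ from the topology in Figure 2, with followers F1–F4 first and leaders R1–R3 last.

```python
import numpy as np

# Laplacian of the 7-agent graph: follower rows couple to their neighbors,
# leader rows are zero because leaders receive no information.
L = np.array([
    [ 1,  0,  0,  0, -1,  0,  0],
    [-1,  3,  0, -1,  0, -1,  0],
    [ 0, -1,  2,  0,  0,  0, -1],
    [-1,  0, -1,  2,  0,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  0],
], dtype=float)

eigs = np.sort_complex(np.linalg.eigvals(L))
print(np.round(eigs, 4))
```

Running this reproduces the triple zero eigenvalue from the three leader rows, together with $1$, $1.2451$, and the complex pair $2.8774 \pm 0.7449i$.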
Example 1.
Consider the CT-DT switching system (1)–(4) under protocol (31) and stochastic switching, with $\alpha = 0.5$. According to Theorem 2, the admissible interval of $\beta$ is $0.1039 < \beta < 2.15$. We then select $\beta = 0.05$ and $\beta = 0.6$ to test whether consensus can be achieved. As seen from Figure 3 and Figure 4, when $\beta = 0.05$ the system cannot reach consensus. In contrast, when $\beta = 0.6$, Figure 5 and Figure 6 show that the system achieves consensus, which verifies Theorem 2.
Figure 3. Response of agent positions for α = 0.5 , β = 0.05 .
Figure 4. Evolution of agent velocities for α = 0.5 , β = 0.05 .
Figure 5. Response of agent positions for α = 0.5 , β = 0.6 .
Figure 6. Evolution of agent velocities for α = 0.5 , β = 0.6 .
As shown in Figure 3 and Figure 4, the system fails to achieve consensus. The positions and velocities of the agents diverge over time, indicating instability. This aligns with Theorem 2, as β = 0.05 violates the gain condition (29), leading to the loss of Hurwitz/Schur stability in the error dynamics.
Figure 5 and Figure 6 demonstrate that all agents successfully achieve consensus. The position and velocity trajectories converge to common values, confirming the theoretical prediction. The convergence is smooth and occurs within approximately 15 s, illustrating the effectiveness of the proposed game-based control strategy under switching dynamics.
Example 2.
Under protocol (34) with stochastic switching, the CT-DT system (1)–(4) is analyzed with $\alpha = 0.33$. From Theorem 3, the admissible interval for $\beta$ is $0.1313 < \beta < 1.756$. We then set $\beta = 0.5$, which yields the admissible range $0 < h < 1.143$ for the sampling period, and select $h = 0.8$ and $h = 1.5$. Figure 7 and Figure 8 demonstrate that the system achieves consensus when $h = 0.8$. In contrast, Figure 9 and Figure 10 show that consensus is not attained when $h = 1.5$, thereby confirming the validity of Theorem 3.
Figure 7. Response of agent positions for α = 0.33 , β = 0.5 and h = 0.8 .
Figure 8. Evolution of agent velocities for α = 0.33 , β = 0.5 and h = 0.8 .
Figure 9. Response of agent positions for α = 0.33 , β = 0.5 and h = 1.5 .
Figure 10. Evolution of agent velocities for α = 0.33 , β = 0.5 and h = 1.5 .
Figure 7 and Figure 8 show that the system achieves consensus. The trajectories converge steadily, validating the sampled-data control approach. The smaller sampling period ensures sufficient data updates to maintain stability under switching.
Figure 9 and Figure 10 reveal that consensus is not attained. The system exhibits oscillatory and divergent behavior, consistent with the violation of the maximum allowable sampling period $h_{\max} = 1.143$. This underscores the critical role of the sampling period in maintaining system stability under sampled-data control.
Remark 4.
The initial states selected in the simulations are representative of a general case with significant initial disagreement. In practice, the proposed control strategies are robust to variations in initial conditions, as the consensus achievement is structurally ensured by the directed spanning forest in the communication graph and the designed game-based control laws.
To address the practical contribution and performance of the proposed method, we provide a comparative analysis based on the convergence behavior observed in our simulations. While the primary goal of this paper is to establish the theoretical foundation and prove consensus achievement under the game-theoretic framework, the simulation results inherently demonstrate favorable performance characteristics.
This convergence performance compares favorably with existing non-game-theoretic approaches for switching systems. Compared with the classic consensus protocols for switched systems in [], which often exhibit slower convergence because agents react passively to neighborhood errors, our game-based approach enables agents (especially leaders) to proactively adjust their strategies by anticipating others' actions. This strategic behavior, guided by the cost functions, leads to a more coordinated and efficient path to consensus, as reflected in the swift settling time in our results. Compared with the sampled-data control method in [], our unified protocol (Example 2) not only avoids frequent controller switching but also, as seen in Figure 7 and Figure 8, maintains a fast convergence rate. This suggests that integrating the game-theoretic decision-making process enhances the system's coordination efficiency even under a unified control structure.

6. Conclusions and Future Work

In this paper, the leader–follower consensus problem in second-order MASs with switching dynamics, composed of both DT and CT subsystems, is investigated. The competitive interactions among multiple leaders are modeled as a multi-player game, with the competition mechanism characterized by the designed game rules. A cost function is assigned to each player, who aims to minimize their own cost for global optimization, and the unique NE of the game is derived. Based on this, two types of linear control protocols are proposed to achieve consensus. The first protocol employs CT control for CT subsystems and DT control for DT subsystems, leading to sufficient and necessary conditions on the network topology and coupling gain for consensus. The second protocol adopts a unified sampled-data control algorithm for both subsystems, with corresponding sufficient and necessary conditions established. Numerical simulations under various scenarios validate the effectiveness of both control strategies in dynamic switching topologies. Future research will focus on consensus control in switching dynamical systems with constraints on sampled position data or communication delays.

Author Contributions

Conceptualization, B.L., P.W. and Z.J.; methodology, B.L. and P.W.; software, H.W.; validation, B.L. and H.W.; formal analysis, B.L. and H.W.; writing—original draft preparation, B.L. and Z.J.; writing—review and editing, P.W. and Z.J.; supervision, Z.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 62373205 and 62033007), Taishan Scholars Project of Shandong Province of China (No. tstp20230624), Taishan Scholars Climbing Program of Shandong Province of China, and Systems Science Plus Joint Research Program of Qingdao University (XT2024101).

Data Availability Statement

The original contributions presented in this study are included in the article. For further inquiries, please contact the corresponding author.

Acknowledgments

The authors thank the editors and anonymous reviewers for their valuable and constructive suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Theorem 3

Proof of Theorem 3.
(Sufficiency). For the system (37), we have
$$\dot{\phi}_i(t) = c_{ij}A\phi_i(t) - u_iB\phi_i(t_k),$$
which can be rewritten as
$$\frac{\mathrm{d}}{\mathrm{d}t}\left(e^{-c_{ij}At}\phi_i(t)\right) = -e^{-c_{ij}At}u_iB\,\phi_i(t_k).$$
By integrating both sides of the equation over the interval [ t k , t ] , one obtains
$$\phi_i(t) = e^{c_{ij}A(t-t_k)}\phi_i(t_k) - e^{c_{ij}At}\int_{t_k}^{t}e^{-c_{ij}As}\,\mathrm{d}s\;u_iB\,\phi_i(t_k) = \begin{bmatrix} c_{ij}-\frac{\alpha}{2}u_i(t-t_k)^2 & c_{ij}(t-t_k)-\frac{\beta}{2}u_i(t-t_k)^2 \\ -\alpha u_i(t-t_k) & c_{ij}-\beta u_i(t-t_k) \end{bmatrix}\phi_i(t_k).$$
It can be rewritten as
$$\phi_i(t) = C_i(t - t_k)\phi_i(t_k),$$
with $C_i(t) = \begin{bmatrix} c_{ij}-\frac{\alpha}{2}u_it^2 & c_{ij}t-\frac{\beta}{2}u_it^2 \\ -\alpha u_it & c_{ij}-\beta u_it \end{bmatrix}$. Since $C_i(t)$ is bounded on $[0, h]$, one has
$$\phi_i(t) = C_i(t - t_k)\,C_i^k(h)\,\phi_i(t_0),$$
where t k + 1 t k = h .
For the DT game control system (38), we have
$$\phi_i(t_{k+1}) = D_i\phi_i(t_k), \qquad \phi_i(t_k) = D_i^k\phi_i(t_0),$$
where $D_i = c_{ij}C - u_iB$.
For $t > 0$, the time is decomposed as $t = t_c + kh$, where $t_c$ and $kh$ are the total operating times of the CT and DT subsystems, respectively. It follows from the above analysis that the switching control system satisfies
$$\phi_i(t) = C_i(t_c - t_k)\,C_i^k(h)\,D_i^k\,\phi_i(t_0).$$
Since $C_i(t_c - t_k)$ is bounded, $\phi_i(t) \to 0$ as $t \to \infty$ provided that all eigenvalues of $C_i(h)$ and $D_i$ lie within the unit circle. The characteristic polynomial of $C_i(h)$ is
$$\det\big(\lambda I_2 - C_i(h)\big) = \lambda^2 + \left(\beta u_ih - 2c_{ij} + \frac{\alpha}{2}u_ih^2\right)\lambda + c_{ij}^2 - c_{ij}u_ih\left(\beta + \frac{\alpha}{2}h\right).$$
Let $\hat{f}_i(\lambda) = \lambda^2 + \left(\beta u_ih - 2c_{ij} + \frac{\alpha}{2}u_ih^2\right)\lambda + c_{ij}^2 - c_{ij}u_ih\left(\beta + \frac{\alpha}{2}h\right)$. Substituting $\lambda = \frac{s+1}{s-1}$ into $\hat{f}_i$, one has
$$\check{f}_i(s) = (s-1)^2\hat{f}_i\!\left(\frac{s+1}{s-1}\right) = \left[(c_{ij}-1)^2 + (1-c_{ij})\left(\beta+\frac{\alpha}{2}h\right)u_ih\right]s^2 + 2\left[1-c_{ij}^2 + c_{ij}u_ih\left(\beta+\frac{\alpha}{2}h\right)\right]s + (c_{ij}+1)^2 - (c_{ij}+1)\left(\beta+\frac{\alpha}{2}h\right)u_ih.$$
If $(c_{ij}-1)^2 + (1-c_{ij})\left(\beta+\frac{\alpha}{2}h\right)u_ih \neq 0$, then
$$\check{f}_i(s) = s^2 + \frac{2\left[1-c_{ij}^2 + c_{ij}u_ih\left(\beta+\frac{\alpha}{2}h\right)\right]}{(c_{ij}-1)^2 + (1-c_{ij})\left(\beta+\frac{\alpha}{2}h\right)u_ih}\,s + \frac{(c_{ij}+1)^2 - (c_{ij}+1)\left(\beta+\frac{\alpha}{2}h\right)u_ih}{(c_{ij}-1)^2 + (1-c_{ij})\left(\beta+\frac{\alpha}{2}h\right)u_ih}.$$
Then, Hurwitz stability of $\check{f}_i(s)$ induces Schur stability of $\hat{f}_i(\lambda)$. The Hurwitz stability analysis of $\check{f}_i(s)$ follows a procedure similar to that for $f_i^d(\lambda)$ in Theorem 2. After simplification, we conclude that the CT game control system is Schur stable if and only if (39) in Theorem 3 is satisfied, i.e., all eigenvalues of $C_i(h)$ lie within the unit circle.
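The substitution $\lambda = \frac{s+1}{s-1}$ maps the open unit disk onto the open left half-plane, so a Schur test on a quadratic reduces to a Hurwitz test on its transform. A small numerical sketch of this equivalence for a generic monic real quadratic (coefficients illustrative):

```python
import numpy as np

def schur_via_bilinear(b, c):
    """For f(z) = z^2 + b z + c, apply z = (s+1)/(s-1):
    (s-1)^2 f((s+1)/(s-1)) = (1+b+c) s^2 + 2(1-c) s + (1-b+c).
    f is Schur (roots in the open unit disk) iff the transformed
    polynomial is Hurwitz (roots in the open left half-plane)."""
    a2, a1, a0 = 1 + b + c, 2 * (1 - c), 1 - b + c
    roots_s = np.roots([a2, a1, a0])
    return all(r.real < 0 for r in roots_s)

def schur_direct(b, c):
    """Reference check: compute the roots of f directly."""
    return all(abs(r) < 1 for r in np.roots([1, b, c]))

# The two tests agree, e.g. for a Schur-stable and an unstable quadratic:
print(schur_via_bilinear(0.5, 0.2), schur_direct(0.5, 0.2))   # True True
print(schur_via_bilinear(2.5, 1.2), schur_direct(2.5, 1.2))   # False False
```

In the proof above, the same transform is applied to $\hat{f}_i(\lambda)$, after which the standard Routh–Hurwitz conditions yield condition (39).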
Similarly, if (40) in Theorem 3 holds, the DT game control system is Schur stable.
Therefore, when B1 and B2 in Theorem 3 are simultaneously satisfied, it follows from (44) that $\lim_{t \to \infty}\phi(t) = 0$ and $\lim_{k \to \infty}\phi(t_k) = 0$. Then, according to Definition 2, the CT-DT game switching control system achieves consensus.
(Necessity) The necessity proof parallels that of Theorem 2 and is therefore omitted. □

References

  1. Qin, J.; Ma, L.; Li, M.; Zhang, C.; Fu, W.; Liu, Q.; Zheng, W. Recent advances on multi-agent collaboration: A cross-perspective of game and control theory. Acta Autom. Sin. 2025, 51, 489–509. [Google Scholar]
  2. Ji, Z.; Yu, H. A new perspective to graphical characterization of multiagent controllability. IEEE Trans. Cybern. 2017, 47, 1471–1483. [Google Scholar] [CrossRef] [PubMed]
  3. Guo, J.; Ji, Z.; Liu, Y.; Lin, C. Unified understanding and new results of controllability model of multi-agent systems. Int. J. Robust Nonlinear Control 2022, 32, 6330–6345. [Google Scholar] [CrossRef]
  4. Kang, L.; Ji, Z.; Liu, Y. Consensus of stochastic multi-agent systems with time-delay and Markov jump. Int. J. Syst. Sci. 2024, 55, 4661–4672. [Google Scholar] [CrossRef]
  5. Sun, Y.; Ji, Z.; Liu, Y.; Lin, C. On stabilizability of multi-agent systems. Automatica 2022, 144, 110491. [Google Scholar] [CrossRef]
  6. Zhao, F.; Gao, W.; Jiang, Z.P.; Liu, T. Event-triggered adaptive optimal control with output feedback: An adaptive dynamic programming approach. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 5208–5221. [Google Scholar] [CrossRef]
  7. Liu, Z.; Bechlioulis, C.P.; Huang, J.; Wen, C.Y. Nonlinear dynamic surface control for a class of uncertain systems with novel nonlinear filters. IEEE Trans. Autom. Control 2025. [Google Scholar] [CrossRef]
  8. Yao, D.; Dou, C.; Yue, D.; Xie, X. Event-triggered practical fixed-time fuzzy containment control for stochastic multiagent systems. IEEE Trans. Fuzzy Syst. 2022, 30, 3052–3062. [Google Scholar] [CrossRef]
  9. Li, Q.; Yang, H.; Yu, C. LP-based leader-following positive consensus of T–S fuzzy multi-agent systems. Mathematics 2025, 13, 3146–3155. [Google Scholar] [CrossRef]
  10. Ni, X.; Yi, K.; Jiang, Y.; Zhang, A.; Yang, C. Consensus control of leaderless and leader-following coupled PDE-ODEs modeled multi-agent systems. Mathematics 2022, 10, 201–215. [Google Scholar] [CrossRef]
  11. Zhao, F.; Luo, S.; Gao, W.; Wen, C. Event-trigged cooperative adaptive optimal output regulation for multiagent systems under switching network: An adaptive dynamic programming approach. IEEE Trans. Syst. Man Cybern. Syst. 2024, 55, 1707–1721. [Google Scholar] [CrossRef]
  12. DeGroot, M.H. Reaching a consensus. J. Am. Stat. Assoc. 1974, 69, 118–121. [Google Scholar] [CrossRef]
  13. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420. [Google Scholar] [CrossRef]
  14. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661. [Google Scholar] [CrossRef]
  15. Liu, K.; Ji, Z. Dynamic event-triggered consensus of general linear multi-agent systems with adaptive strategy. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3440–3444. [Google Scholar] [CrossRef]
  16. Liu, B.; Lu, W.; Chen, T. Consensus in continuous-time multiagent systems under discontinuous nonlinear protocols. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 290–301. [Google Scholar] [CrossRef]
  17. Tian, L.; Ji, Z.; Liu, Y.; Lin, C. A unified approach for the influences of negative weights on system consensus. Syst. Control Lett. 2021, 160, 105–109. [Google Scholar] [CrossRef]
  18. Goebel, R.; Sanfelice, R.G.; Teel, A.R. Hybrid Dynamical Systems: Modeling, Stability, and Robustness; Princeton University Press: Princeton, NJ, USA, 2012; pp. 1–279. [Google Scholar]
  19. Li, G.; Li, Z.; Kan, Z. Assimilation control of a robotic exoskeleton for physical human-robot interaction. IEEE Robot. Autom. Lett. 2022, 7, 2977–2984. [Google Scholar] [CrossRef]
  20. Susuki, Y.; Koo, T.J.; Ebina, H.; Yamazaki, T.; Ochi, T.; Uemura, T.; Hikihara, T. A hybrid system approach to the analysis and design of power grid dynamic performance. Proc. IEEE 2012, 100, 225–239. [Google Scholar] [CrossRef]
  21. Yang, L.; Constantiescu, D.; Wu, L. Event-triggered connectivity maintenance of a teleoperated multi-robot system. IEEE Trans. Control Netw. Syst. 2025, 12, 430–437. [Google Scholar] [CrossRef]
  22. Zheng, Y.; Ma, J.; Wang, L. Consensus of hybrid multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1359–1365. [Google Scholar] [CrossRef]
  23. Zheng, Y.; Zhao, Q.; Ma, J.; Wang, L. Second-order consensus of hybrid multi-agent systems. Syst. Control Lett. 2019, 125, 51–58. [Google Scholar] [CrossRef]
  24. Zhai, G.; Liu, D.; Imae, J.; Kobayashi, T. Lie algebraic stability analysis for switched systems with continuous-time and discrete-time subsystems. IEEE Trans. Circuits Syst. II Express Briefs 2006, 53, 152–156. [Google Scholar] [CrossRef]
  25. Zheng, Y.; Wang, L. Consensus of switched multiagent systems. IEEE Trans. Circuits Syst. II Express Briefs 2016, 63, 314–318. [Google Scholar] [CrossRef]
  26. Lin, X.; Zheng, Y.; Wang, L. Consensus of switched multi-agent systems with random networks. Int. J. Control 2017, 90, 1113–1122. [Google Scholar] [CrossRef]
  27. Liu, Y.; Su, H. Second-order consensus for multiagent systems with switched dynamics and sampled position data. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 4129–4137. [Google Scholar] [CrossRef]
  28. Liu, Y.; Su, H.; Zeng, Z. Second-order consensus for multiagent systems with switched dynamics. IEEE Trans. Cybern. 2022, 52, 4105–4114. [Google Scholar] [CrossRef]
  29. Ma, J.; Ye, M.; Zheng, Y.; Zhu, Y. Consensus analysis of hybrid multiagent systems: A game-theoretic approach. Int. J. Robust Nonlinear Control 2019, 29, 1840–1853. [Google Scholar] [CrossRef]
  30. Wang, P.; Ji, Z.; Liu, X. Consensus of first-order hybrid multi-agent systems based on game theory. Complex Syst. Complex. Sci. 2025. Available online: https://link.cnki.net/urlid/37.1402.N.20241209.1306.002 (accessed on 11 October 2025).
  31. Xie, K.; Lu, M.; Deng, F.; Sun, J.; Chen, J. Data-driven dynamic output feedback nash strategy for multi-player non-zero-sum games. J. Syst. Sci. Complex. 2025, 38, 597–612. [Google Scholar] [CrossRef]
  32. Gantmakher, F.R. The Theory of Matrices; American Mathematical Society: Providence, RI, USA, 1959. [Google Scholar]
  33. Parks, P.C.; Hahn, V. Stability Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1993. [Google Scholar]