Article

Consensus-Based Formation Control for Heterogeneous Multi-Agent Systems in Complex Environments

Xiaofei Chang, Yiming Yang, Zhuo Zhang, Jiayue Jiao, Haoyu Cheng and Wenxing Fu
1 Unmanned System Research Institute, Northwestern Polytechnical University, Xi’an 710072, China
2 National Key Laboratory of Unmanned Aerial Vehicle Technology, Northwestern Polytechnical University, Xi’an 710072, China
3 School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072, China
4 National Key Laboratory of Aircraft Configuration Design, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Drones 2025, 9(3), 175; https://doi.org/10.3390/drones9030175
Submission received: 17 January 2025 / Revised: 8 February 2025 / Accepted: 24 February 2025 / Published: 26 February 2025
(This article belongs to the Special Issue Swarm Intelligence in Multi-UAVs)

Abstract

The purpose of this paper is to develop formation control strategies for heterogeneous multi-intelligent-agent systems in complex environments, with the goal of enhancing their performance, reliability, and stability. Complex flight conditions, such as navigating narrow gaps in urban high-rise buildings, pose considerable challenges for agent control. To address these challenges, this paper proposes a consensus-based formation strategy that integrates graph theory and multi-consensus algorithms. This approach incorporates time-varying group consistency to strengthen fault tolerance and reduce interference while ensuring obstacle avoidance and formation maintenance in dynamic environments. Through a Lyapunov stability analysis, combined with minimum dwell time constraints and the LaSalle invariance principle, this work proves the convergence of the proposed control scheme under changing network topologies. Simulation results confirm that the proposed strategy significantly improves system performance, mission execution capability, autonomy, synergy, and robustness, thereby enabling agents to successfully maintain formation and avoid obstacles in both homogeneous and heterogeneous clusters in complex environments.

1. Introduction

A heterogeneous multi-intelligent-agent system (MAS) comprises multiple types of agents with diverse functionalities to collaboratively accomplish complex tasks. Compared with homogeneous intelligent agent systems, heterogeneous systems have significant advantages in task allocation, resource utilization, flexibility, and robustness. By employing rational task division, efficient information sharing, and strategic path planning, heterogeneous multi-intelligent-agent systems are able to perform tasks safely and efficiently in complex terrains and multi-obstacle environments.
While the potential of these systems is vast, they are particularly challenged in dynamic, real-world environments, especially when it comes to maintaining stable formations under rapidly changing conditions. The effectiveness of traditional formation control strategies is limited in such contexts, especially for homogeneous systems. For instance, in scenarios like navigating narrow gaps between high-rise buildings or conducting search and rescue missions in mountainous, unpredictable environments, single-agent systems fail to meet the required precision and coordination. In contrast, multi-agent formation control strategies introduce more promising solutions. However, even these strategies face significant hurdles. Complex terrain, dynamic environmental factors, and coordination difficulties inherent in heterogeneous agents complicate the task further. Though traditional centralized control methods excel in accuracy, they lack scalability and fault tolerance, making them unsuitable for larger or more complex agent systems. Distributed control strategies, on the other hand, offer a more scalable and fault-tolerant solution, but they are not without their own challenges, including maintaining global coordination in dynamic environments. In recent years, many studies have focused on developing distributed control algorithms, including consistency algorithms, leader–follower methods, and graph-theory-based techniques. These approaches have made significant progress in multi-agent formation control, but they are still limited by the inherent complexity of dynamic environments. The fundamental challenge remains how to effectively coordinate heterogeneous agents across complex, real-time environments. Additionally, while many solutions aim to enhance system robustness and efficiency, they often overlook real-time adaptability and precise coordination among agents. Table 1 presents an overview of the limitations found in existing methods when applied to complex environments.
The core idea of a distributed formation control strategy is to achieve globally coordinated control through localized information interactions. Classical distributed control algorithms include consensus algorithms, the leader–follower method, and graph-theory-based methods. Consensus algorithms achieve state agreement among agents through the design of tailored protocols. Cordeiro et al. [1] presented a nonlinear robust formation flight controller for a swarm of unmanned aerial vehicles; it is based on the virtual leader approach and is capable of establishing and maintaining formations with time-varying configurations. Cappello et al. [2] designed a distributed control law based on linear quadratic differential games for a linear heterogeneous multi-agent system. Tan et al. [3], under a fixed communication topology, adopted a prediction-based control protocol to enable multi-agent systems with saturation constraints and communication delays to achieve global consensus. Wang et al. [4] combined algebraic graph theory with a distributed backstepping approach to propose a powerful distributed adaptive tracking control method that addresses inherent nonlinearities and uncertainties in system parameters. Chen et al. [5] designed formation containment control algorithms for second-order nonlinear multi-agent systems with communication delays and used variable gain techniques to eliminate the effect of communication delays on leader formation control. Zhang et al. [6] studied the distributed H∞ consensus problem for heterogeneous multi-agent systems with non-convex constraints. Li et al. [7] studied the problem of cooperative output regulation for linear multi-agent systems, designed a distributed adaptive observer-based controller, and further proposed a distributed output feedback control method for the special case of undirected graphs.
Although homogeneous intelligent agent formation control has achieved remarkable results in practical applications, it still faces certain limitations in complex environments. To address these challenges, heterogeneous multi-agent formation control strategies have been developed. Heterogeneous formation control faces difficulties in modeling, control, and communication, and requires more flexible and adaptive control methods. In recent years, control strategies for heterogeneous multi-intelligent-agent systems, such as sliding-mode control and neural-network-based control methods, have received extensive attention. Lu et al. [8] investigated the consensus problem in heterogeneous multi-agent systems comprising second-order linear and nonlinear agents; input saturation and input unsaturation algorithms were proposed to ensure consensus among agents within the heterogeneous systems. Park et al. [9] addressed connectivity-preserving and collision-avoiding formation tracking for networked uncertain underactuated surface vessels with heterogeneous limited communication ranges. Xiong et al. [10] investigated the discrete-time formation containment-tracking control problem for linear heterogeneous multi-unmanned aircraft systems under leader–follower formation with switching directed topology and unknown external perturbations, and proposed a distributed containment control strategy. Hua et al. [11] designed a distributed sliding mode observer to implement formation tracking control with unknown leader inputs. Combining finite-time stability theory with an event-triggered mechanism, Duan et al. [12] proposed a control protocol based on a leader state observer to achieve consensus.
The development of consistency theory provides theoretical support for distributed formation control methods, and group consistency has gradually gained attention as an effective formation control approach. Group consistency refers to dividing agents into groups, achieving intra-group consistency while maintaining appropriate inter-group positions. The group-behavior-based grouping-consistency algorithm clusters agents into groups, each of which independently achieves consistency, thereby simplifying the complexity of formation control. The group consistency strategy with time-varying formation can dynamically adjust the formation structure according to environmental changes and mission requirements, thus enabling flexible formation control. The application of group consistency in heterogeneous multi-intelligent-agent systems shows significant advantages in improving system autonomy, synergy, and robustness. Duan et al. [13] explored fixed-time time-varying output formation–containment control for a heterogeneous general multi-agent system consisting of a virtual leader, multiple leaders, and followers. Hua et al. [14] investigated the problem of fault-tolerant time-varying formation control for high-order linear multi-agent systems in the presence of actuator failures; a fully distributed formation control protocol was proposed, utilizing an adaptive online update strategy. Lan et al. [15] explored a distributed time-varying optimal formation protocol for a class of second-order uncertain nonlinear dynamic multi-agent systems based on an adaptive neural network state observer with backstepping and simplified reinforcement learning. Mo et al. [16] investigated mean-square H∞ antagonistic formation control for second-order multi-agent systems with multiplicative noise and external interference under a directed signed topology. Zhao et al. [17] studied the group consensus problem for discrete-time multi-agent systems with fixed and randomly switching topologies.
In summary, multi-intelligent-agent formation control strategies have made significant progress in distributed control, homogeneous and heterogeneous agent control, and group consistency strategies. However, formation control in complex flight environments continues to encounter significant challenges that necessitate further research and exploration. Future research directions include optimizing existing control algorithms, developing more adaptive and intelligent control strategies, and verifying the effectiveness of new approaches through simulation and field experiments. These studies will provide important support for enhancing the performance and mission execution capabilities of multi-intelligent-agent systems in practical applications.
In this article, a consensus-based formation approach is proposed for heterogeneous multi-intelligent-agent systems, leveraging time-varying group consistency to reinforce fault tolerance and anti-jamming capacity. Specifically, the following three contributions are emphasized:
First, building on distributed-consensus theory, we derive the conditions for stable and flexible formation control in complex environments. By incorporating time-varying formation, we transform the problem of maintaining group consistency under changing topologies into a more tractable framework, simplifying the analysis of system stability. Second, we introduce a parametric mechanism that enables dynamic subgrouping and diverse convergence objectives. This mechanism defines analytical bounds on formation parameters to rapidly generate feasible configurations, ensuring real-time responsiveness to varying mission demands. Third, we propose a combined strategy that merges the theoretical rigor of our group consistency approach with robust Lyapunov-based stability proofs, including minimum dwell time constraints and the LaSalle invariance principle, to guarantee that the system remains stable when formation configurations change over time.
Although existing approaches address some components of multi-agent formation control, our proposed methodology distinguishes itself by combining nonlinear dynamics, optimal subgroup coordination, and real-time adjustments to formation structures. This comprehensive approach significantly enhances the system’s self-sufficiency, collaborative performance, and overall resilience, thereby presenting strong potential for future applications in heterogeneous multi-intelligent-agent systems.

2. Methods

2.1. Graph Theory

Graph theory is a mathematical framework that studies graphs composed of nodes (points) and the edges (lines) that connect them. In multi-intelligent-agent systems, graph theory is employed to analyze and characterize the network topology, where individual agents are represented as nodes and their communication links are modeled as edges. This representation enables a comprehensive algebraic characterization of the system’s structure, which is further analyzed using stability theory.
In this paper, weighted undirected graphs are adopted to model the communication relations in a multi-intelligent-agent system. The definition of the undirected graph $G$ used in this paper is given first.
Definition 1
([18]). For a multi-intelligent-agent system with a single leader, let $G = (V, E, A)$, where $V = \{v_0, v_1, \ldots, v_N\}$ is the set of nodes and $v_0$ represents the leader. The matrix $B = \mathrm{diag}\{a_{10}, a_{20}, \ldots, a_{N0}\}$ is the leader adjacency matrix; if there is a communication link between nodes $v_i$ and $v_0$, then $a_{i0} > 0$. $A = [a_{ij}] \in \mathbb{R}^{N \times N}$ is the weighted adjacency matrix of the graph $G$; if $(v_i, v_j) \in E$, then $a_{ij} > 0$; otherwise, $a_{ij} = 0$. $D = [d_{ij}] \in \mathbb{R}^{N \times N}$ is the in-degree matrix, with $d_{ii} = \sum_{j=1}^{N} a_{ij}$ and $d_{ij} = 0$ for $i \neq j$. The Laplace matrix is defined as $L = D - A$, with $L = [l_{ij}] \in \mathbb{R}^{N \times N}$, and $H = L + B$. In the subsequent analysis, $a_{i0} = 1$ if there is communication between the leader and follower $i$; otherwise, $a_{i0} = 0$.
Definition 2
([18]). For a multi-leader multi-intelligent-agent system $G = (V, E, A)$, where $V = \{v_1, v_2, \ldots, v_N, v_{N+1}, \ldots, v_{N+M}\}$, the first $N$ nodes represent followers and the remaining $M$ nodes represent leaders. The followers form a subgraph $G_F = (v_f, e_f, a_f)$ with nodes $v_f = \{v_1, v_2, \ldots, v_N\}$ and edge set $e_f \subseteq v_f \times v_f$. The Laplace matrix of the graph $G$ is defined as $L = D - A$, where $A = [a_{ij}] \in \mathbb{R}^{(N+M) \times (N+M)}$ is the adjacency matrix and $D = \mathrm{diag}\{D_1, D_2, \ldots, D_N, D_{N+1}, \ldots, D_{N+M}\} \in \mathbb{R}^{(N+M) \times (N+M)}$ is the degree matrix, with $D_i = \sum_{j=1}^{N+M} a_{ij}$ the degree of node $i$. The Laplace matrix has the following block form:
$$ L = \begin{bmatrix} L_F & L_L \\ 0_{M \times N} & 0_{M \times M} \end{bmatrix} $$
where $L_F \in \mathbb{R}^{N \times N}$ and $L_L \in \mathbb{R}^{N \times M}$. The off-diagonal elements of $L_F$ reflect the relationships among the $N$ individual followers, while $L_L$ reflects the neighborhood relationships between the followers and the leaders. Since a leader can only send messages to its subordinate followers, the leaders have no incoming edges in the Laplace matrix $L$; accordingly, the last $M$ rows of $L$ are all zero, indicating that the leaders receive no information from the other members.
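To make the block structure above concrete, the following is a minimal Python sketch (not from the paper) that assembles the Laplace matrix of a toy leader–follower graph; the specific edge lists are illustrative assumptions.

```python
import numpy as np

# Toy graph: 3 followers (indices 0-2), 2 leaders (indices 3-4). Follower-follower
# links are symmetric; a leader only sends information to followers, so the leader
# rows receive no incoming edges and end up as zero rows of L.
N, M = 3, 2
A = np.zeros((N + M, N + M))
for i, j in [(0, 1), (1, 2)]:        # undirected follower subgraph
    A[i, j] = A[j, i] = 1.0
for f, l in [(0, 3), (2, 4)]:        # follower f receives information from leader l
    A[f, l] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A
print(L)
print(np.allclose(L[N:, :], 0.0))    # last M rows are zero: [[L_F, L_L], [0, 0]]
```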
Lemma 1
([18]). For an undirected graph  G , its Laplace matrix  L  has the following characteristics:
  • $L$ has a zero eigenvalue, whose corresponding eigenvector is $\mathbf{1}_N = (1, 1, \ldots, 1)^T$, and all of its nonzero eigenvalues are positive real numbers.
  • $L$ is a positive semidefinite symmetric matrix and satisfies $\mathbf{1}^T L = 0$.
  • The eigenvalues of $L$ are $0, \lambda_1, \lambda_2, \ldots, \lambda_N$, with $0 \le \lambda_1 \le \lambda_2 \le \cdots \le \lambda_N$. The second smallest eigenvalue $\lambda_1$ is defined as the algebraic connectivity of the graph $G$ and satisfies the following:
$$ \lambda_1(L) = \min_{x^T \mathbf{1} = 0,\ x \neq 0} \frac{x^T L x}{x^T x} $$
  • For any vector $x = (x_1, x_2, x_3, \ldots, x_N)^T \in \mathbb{R}^N$, the following holds:
$$ x^T L x = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} \left( x_j - x_i \right)^2 $$
Multi-intelligent-agent systems must consider communication constraints during the design of formation-grouping strategies, where the communication strength between the i - th and j - th agents is affected by the relative distance. To account for communication constraints, the element a i j in the adjacency matrix A is adjusted accordingly. The modified matrix elements represent values in continuous space, and a i j can be expressed as follows:
$$ a_{ij}\left(\|x_{ij}(t)\|\right) = \begin{cases} 1, & \|x_{ij}(t)\| \in [0, \tau R) \\ \dfrac{1}{2}\left(1 + \cos\!\left(\pi \dfrac{\|x_{ij}(t)\|/R - \tau}{1 - \tau}\right)\right), & \|x_{ij}(t)\| \in [\tau R, R) \\ 0, & \|x_{ij}(t)\| \in [R, \infty) \end{cases} $$
where $x_{ij}$ is the relative position vector between the $i$-th and the $j$-th agent, $\tau$ is defined in this paper as the communication-limited decay rate, $R$ is the farthest communication boundary, and $\|\cdot\|$ is the 2-norm indicating the distance between the two agents. The modified adjacency matrix is defined as $\bar{A}$, with elements $a_{ij}$, and the corresponding Laplace matrix is $\bar{L}$, whose elements are continuous functions of time and correspond to an undirected graph. In graph theory, a path is a finite-length sequence of vertices starting from any vertex and formed by traversing edges between neighbors. If a path exists between the $i$-th agent and the $j$-th agent, the two agents are connected.
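As an illustration of this weighting scheme, the following Python sketch (not the paper's MATLAB code) builds the modified adjacency matrix and Laplace matrix from agent positions; the values R = 50 and τ = 0.6 are taken from the simulation parameters in Table 2, while the positions are arbitrary.

```python
import numpy as np

def comm_weight(dist, R=50.0, tau=0.6):
    """Distance-dependent link weight a_ij from the piecewise cosine rule above."""
    if dist < tau * R:
        return 1.0
    if dist < R:
        return 0.5 * (1.0 + np.cos(np.pi * (dist / R - tau) / (1.0 - tau)))
    return 0.0

def modified_laplacian(positions, R=50.0, tau=0.6):
    """Build the modified adjacency matrix A_bar and Laplace matrix L_bar = D - A_bar."""
    n = len(positions)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = comm_weight(np.linalg.norm(positions[i] - positions[j]), R, tau)
    D = np.diag(A.sum(axis=1))
    return A, D - A

# Example: four agents on a line; distant pairs receive attenuated or zero weights.
pos = np.array([[0.0, 0.0], [10.0, 0.0], [35.0, 0.0], [60.0, 0.0]])
A_bar, L_bar = modified_laplacian(pos)
print(np.round(A_bar, 3))
print(np.allclose(L_bar.sum(axis=1), 0.0))  # rows of a Laplace matrix sum to zero
```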

2.2. Multi-Consensus Theory

In a heterogeneous multi-agent system with $n$ agents, the agents are considered to have reached consensus if their states asymptotically reach agreement with each other; the consensus condition between the $i$-th and the $j$-th agent is defined as follows:
$$ \lim_{t \to \infty} \left\| x_i(t) - x_j(t) \right\| = 0, \qquad \lim_{t \to \infty} v_i(t) = \lim_{t \to \infty} v_j(t) = 0 $$
This formula implies that when the position deviations among the agents vanish and their velocities align, the system achieves consensus, enabling coordinated formation behavior. When different groups of agents converge to distinct consensus states, the multi-intelligent-agent system completes the grouping and can carry out a multi-objective task. If the methods of this section and the states, such as position and velocity, need to be generalized to multiple dimensions, the Kronecker product can be used.
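The consensus condition above can be checked numerically; the following short Python sketch (an illustration, not part of the paper) tests pairwise position agreement and the vanishing-velocity requirement within assumed tolerances.

```python
import numpy as np

def consensus_reached(x, v, pos_tol=1e-2, vel_tol=1e-2):
    """Check the consensus condition: all pairwise position deviations are (near)
    zero and all velocities are (near) zero, within the given tolerances."""
    x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
    pos_ok = all(np.linalg.norm(x[i] - x[j]) < pos_tol
                 for i in range(len(x)) for j in range(i + 1, len(x)))
    vel_ok = all(np.linalg.norm(vi) < vel_tol for vi in v)
    return pos_ok and vel_ok

print(consensus_reached([[1.0, 2.0]] * 4, [[0.0, 0.0]] * 4))  # True
```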
The Laplace matrix L ^ under multiple consensus is given below:
$$ \hat{L} = \begin{bmatrix} L_{11} & \frac{\omega_1}{\omega_2} L_{12} & \cdots & \frac{\omega_1}{\omega_n} L_{1n} \\ \frac{\omega_2}{\omega_1} L_{21} & L_{22} & \cdots & \frac{\omega_2}{\omega_n} L_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\omega_n}{\omega_1} L_{n1} & \frac{\omega_n}{\omega_2} L_{n2} & \cdots & L_{nn} \end{bmatrix} $$
where $\omega_i$ denotes the intelligence degree of the $i$-th agent, and $L_{ij}$ denotes the $(i, j)$ element of the Laplace matrix. In a multi-intelligent-agent system, two agents $i$ and $j$ can achieve consensus, i.e., satisfy the above condition, if they share the same intelligence degree $\omega_i = \omega_j$. In other words, agents with different intelligence degrees can converge to multiple distinct consensus states and thereby form distinct groups. Therefore, when performing multi-objective tasks, the agents can be grouped into clusters of varying sizes and configurations by assigning appropriate intelligence degrees to different goals. Compared with traditional protocols that drive all agents to a single common state, multiple consensus can be achieved more conveniently through intelligence-degree assignment.
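The following Python sketch illustrates one way to obtain the multi-consensus Laplace matrix from an ordinary Laplace matrix by scaling its $(i, j)$ entries with $\omega_i / \omega_j$; this entrywise construction is an assumption based on the matrix form above, and the intelligence degrees (1, 1.1, 1.2) are the ones later used in the simulations.

```python
import numpy as np

def multi_consensus_laplacian(L, omega):
    """Scale the (i, j) entries of L by omega_i / omega_j, as in the matrix L_hat above."""
    omega = np.asarray(omega, dtype=float)
    ratio = np.outer(omega, 1.0 / omega)   # ratio[i, j] = omega_i / omega_j
    return ratio * L                       # diagonal entries are unchanged (ratio = 1)

L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
print(np.round(multi_consensus_laplacian(L, omega=[1.0, 1.1, 1.2]), 3))
```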

2.3. Stability Theory

Lyapunov stability theory is an important control-theoretic tool that analyzes system stability through the state equations and an energy-like function. The indirect method determines the stability of the system by analyzing the equilibrium points of the system equations and the distribution of the eigenvalues of the linearized state equation in the complex plane. The direct method evaluates stability by defining an energy function and demonstrating the negative (semi)definiteness of its derivative, with the choice of method depending on the specific application requirements.
The obstacle avoidance potential function is designed as follows:
$$ G_1\left(\|x_{ij}\|\right) = \begin{cases} k_1 \displaystyle\int_{\|x_{ij}\|}^{r_{out}} g_1(s)\, ds, & \|x_{ij}\| \in [r_{in}, r_{out}] \\ 0, & \text{otherwise} \end{cases} $$
where:
$$ g_1\left(\|x_{ij}\|\right) = \frac{1}{2}\left(1 + \cos\!\left(\pi \frac{\|x_{ij}\| - r_{in}}{r_{out} - r_{in}}\right)\right) $$
Then, the collision avoidance control for agent i at x i is defined as follows:
$$ u_i^{q} = -\sum_{j \in N_i^{q}} \nabla_{x_i} G_1\left(\|x_{ij}\|\right) $$
where $\nabla_{x_i}$ denotes the gradient with respect to $x_i$.
In the collision avoidance region, the relative distance between two agents is less than $r_{out}$; the smaller the relative distance, the larger the control input generated by this term. The MAS can thus achieve collision avoidance.
  • The connectivity preserving potential function is designed as follows:
$$ G_2\left(\|x_{ij}\|\right) = \begin{cases} k_2 \displaystyle\int_{\rho_{ij}}^{\|x_{ij}\|} g_2(s)\, ds, & \|x_{ij}\| \in [r_{out}, R] \\ 0, & \text{otherwise} \end{cases} $$
where:
$$ g_2\left(\|x_{ij}\|\right) = \begin{cases} \dfrac{1}{2}\left(1 + \cos\!\left(\pi \dfrac{\|x_{ij}\| - r_{out}}{\rho_{ij} - r_{out}}\right)\right), & \|x_{ij}\| \in [r_{out}, \rho_{ij}) \\ \dfrac{1}{2}\left(1 + \cos\!\left(\pi \dfrac{\|x_{ij}\| - R}{R - \rho_{ij}}\right)\right), & \|x_{ij}\| \in [\rho_{ij}, R] \end{cases} $$
where $\rho_{ij} = \|\rho_i - \rho_j\|$ refers to the required relative distance, and $\rho_i$ refers to the target state of agent $i$ in the formation shape.
Then, the connectivity preservation control input of agent i can be designed as follows:
$$ u_i^{l} = -\sum_{j \in N_i^{l}} \nabla_{x_i} G_2\left(\|x_{ij}\|\right) $$
When $\|x_{ij}\| > R$, the potential vanishes, so this design allows the connection between agents to be broken when necessary. Therefore, the network can be flexibly changed to achieve the desired formation.
  • Another point to note: the agent spacing of the expected formation should be set within $(r_{out}, R)$. In other words, the states of agent $i$ and agent $j$ need to satisfy $0 < r_{out} \le \|\rho_{ij}\| \le R$, $\forall i, j \in V$, $\omega_i = \omega_j$, $i \neq j$.
In this paper, we propose decentralized multi-consensus control methods with collision avoidance and connectivity preservation:
$$ u_i = -\omega_i^2 \sum_{j \in N_i^{q}} \nabla_{x_i} G_1\left(\|x_{ij}\|\right) - \omega_i^2 \sum_{j \in N_i^{l}} \nabla_{x_i} G_2\left(\|x_{ij}\|\right) - \alpha \sum_{j \in N_i} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right) - \beta v_i $$
where:
$$ S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right) = \begin{cases} \varepsilon_i^{\rho} - \dfrac{\omega_i}{\omega_j}\varepsilon_j^{\rho}, & \left\| \varepsilon_i^{\rho} - \dfrac{\omega_i}{\omega_j}\varepsilon_j^{\rho} \right\| \le v_m \\ v_m \dfrac{\varepsilon_i^{\rho} - \dfrac{\omega_i}{\omega_j}\varepsilon_j^{\rho}}{\left\| \varepsilon_i^{\rho} - \dfrac{\omega_i}{\omega_j}\varepsilon_j^{\rho} \right\|}, & \left\| \varepsilon_i^{\rho} - \dfrac{\omega_i}{\omega_j}\varepsilon_j^{\rho} \right\| > v_m \end{cases} $$
where $v_m$ is the maximum safe speed, $\varepsilon_k^{\rho} = x_k - \rho_k$, $k = 1, 2, \ldots, n$, and $\alpha, \beta > 0$. When the subgroup reaches its target, $v_i = 0$.
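The sketch below illustrates how this control law can be evaluated in Python (not the paper's implementation). The gains and boundaries follow Table 2, while $v_m$ and the analytical gradient of $G_1$ are assumptions; the $G_2$ term is omitted for brevity and follows the same pattern.

```python
import numpy as np

R, r_out, r_in = 50.0, 5.0, 0.5                      # boundaries from Table 2
k1, alpha, beta, v_m = 8.0, 1.5, 1.5, 2.0            # v_m is an assumed safe speed

def g1(d):
    return 0.5 * (1 + np.cos(np.pi * (d - r_in) / (r_out - r_in)))

def grad_G1(xi, xj):
    """Gradient of the collision-avoidance potential w.r.t. x_i.
    Since G1 integrates g1 from ||x_ij|| to r_out, dG1/d||x_ij|| = -k1 * g1(||x_ij||)."""
    d = np.linalg.norm(xi - xj)
    if r_in <= d <= r_out:
        return -k1 * g1(d) * (xi - xj) / d
    return np.zeros_like(xi)

def saturation(eps_i, eps_j, w_i, w_j):
    """Speed-bounded consensus error S(eps_i, eps_j, v_m)."""
    e = eps_i - (w_i / w_j) * eps_j
    n = np.linalg.norm(e)
    return e if n <= v_m else v_m * e / n

def control_input(i, x, v, rho, omega, neighbors):
    """u_i = -w_i^2 * sum grad(G1) - alpha * sum S - beta * v_i  (G2 term omitted)."""
    u = -beta * v[i]
    for j in neighbors[i]:
        u -= omega[i] ** 2 * grad_G1(x[i], x[j])
        u -= alpha * saturation(x[i] - rho[i], x[j] - rho[j], omega[i], omega[j])
    return u
```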
For the multi-consensus formation-grouping control law proposed in the previous section, the Lyapunov function is defined as follows:
$$ V = \sum_{i=1}^{N} \left( \sum_{j \in N_i^{q}} G_1\left(\|x_{ij}\|\right) + \sum_{j \in N_i^{l}} G_2\left(\|x_{ij}\|\right) + \frac{\alpha}{\omega_i^2} \int_{0}^{t} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right)^T d\varepsilon_i^{\rho} \right) + \sum_{i=1}^{n} \left\| \frac{v_i}{\omega_i} \right\|^2 $$
Proving stability in the formation-group control of multi-intelligent-agent systems requires consideration of two cases. The first case involves a static network of connections, while the second addresses dynamic networks. When communication is restricted, the topology of the undirected graph changes over time. In the case of target formation change, changes in network topology are segmented and continuous. To address the above situation, this paper analyzes the Lyapunov function of the dynamic topology graph G and establishes stability proofs.
When the control input of the system is a multi-consensus control law, the following three conditions need to be satisfied to achieve formation targets for arbitrary numbers of agents in both dynamically and statically connected networks during the formation process:
  • The intelligence degree needs to satisfy $\omega_i \neq 0$, $\forall i \in V$;
  • The initial value of the Lyapunov function $V(0)$ is a finite constant;
  • The relationship between neighboring agents needs to satisfy $0 < r_{out} \le \|\rho_{ij}\| \le R$, $\forall i, j \in V$, $\omega_i = \omega_j$, $i \neq j$.
The process of proving stability begins with calculating the connectivity components of the Lyapunov function.
$$ \sum_{j \in N_i^{q}} \dot{G}_1\left(\|x_{ij}\|\right) = \dot{x}_i^T \sum_{j \in N_i^{q}} \nabla_{x_i} G_1\left(\|x_{ij}\|\right) $$
where $\dot{x}_i^T = v_i^T$, and the following holds:
$$ \int_{0}^{t} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right)^T d\varepsilon_i^{\rho} = \int_{0}^{t} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right)^T v_i \, d\tau $$
Based on Equation (8), the Lyapunov function (Equation (9)) can be transformed into the following:
$$ \dot{V} = \sum_{i=1}^{N} \left( \dot{x}_i^T \sum_{j \in N_i^{q}} \nabla_{x_i} G_1\left(\|x_{ij}\|\right) + \dot{x}_i^T \sum_{j \in N_i^{l}} \nabla_{x_i} G_2\left(\|x_{ij}\|\right) + \frac{\alpha}{\omega_i^2} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right)^T v_i + \hat{v}_i \dot{\hat{v}}_i \right) $$
In Equation (10), the following equation exists:
$$ \sum_{i=1}^{n} \hat{v}_i \dot{\hat{v}}_i = \sum_{i=1}^{n} \hat{v}_i \frac{u_i}{\omega_i} $$
where $\hat{u} = \left[ \frac{u_1}{\omega_1}, \frac{u_2}{\omega_2}, \ldots, \frac{u_n}{\omega_n} \right]^T$; then $\dot{V}$ can be written as follows:
$$ \dot{V} = \sum_{i=1}^{N} \left( v_i^T \sum_{j \in N_i^{q}} \nabla_{x_i} G_1\left(\|x_{ij}\|\right) + v_i^T \sum_{j \in N_i^{l}} \nabla_{x_i} G_2\left(\|x_{ij}\|\right) + \frac{\alpha}{\omega_i^2} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right)^T v_i + \hat{v}_i \frac{u_i}{\omega_i} \right) $$
In the above equation, $v = [v_1, v_2, \ldots, v_n]^T$; define $\Lambda = \mathrm{diag}\{\omega_1, \omega_2, \ldots, \omega_n\}$, and the following stacked gradient vector is used:
$$ \nabla G_1 = \left[ \sum_{j \in N_1^{q}} \nabla_{x_1} G_1\left(\|x_{1j}\|\right), \sum_{j \in N_2^{q}} \nabla_{x_2} G_1\left(\|x_{2j}\|\right), \ldots, \sum_{j \in N_n^{q}} \nabla_{x_n} G_1\left(\|x_{nj}\|\right) \right]^T $$
then:
$$ S = \left[ S\left(\varepsilon_1^{\rho}, \varepsilon_j^{\rho}, v_m\right), S\left(\varepsilon_2^{\rho}, \varepsilon_j^{\rho}, v_m\right), \ldots, S\left(\varepsilon_n^{\rho}, \varepsilon_j^{\rho}, v_m\right) \right]^T $$
According to the multi-consensus control law given in the previous section, we derive the following:
$$ \dot{V} = v^T \nabla G_1 + v^T \nabla G_2 + \alpha v^T \Lambda^{-2} S + \hat{v}^T \hat{u} $$
One can easily obtain $\hat{u} = \Lambda^{-1} u$ and $\hat{v} = \Lambda^{-1} v$; the multi-consensus control law $u$ can alternatively be written as follows:
$$ u = -\Lambda^{2} \nabla G_1 - \Lambda^{2} \nabla G_2 - \alpha S - \beta v $$
Substituting the multi-consensus control law of Equation (16) into the derivative of the Lyapunov function, one obtains the following:
$$ \dot{V} = v^T \nabla G_1 + v^T \nabla G_2 + \alpha v^T \Lambda^{-2} S + \hat{v}^T \Lambda^{-1} \left( -\Lambda^{2} \nabla G_1 - \Lambda^{2} \nabla G_2 - \alpha S - \beta v \right) = \alpha v^T \Lambda^{-2} S - \alpha \hat{v}^T \Lambda^{-1} S - \beta \hat{v}^T \hat{v} = -\beta \hat{v}^T \hat{v} \le 0 $$
Proving the Lyapunov stability of the control law requires demonstrating two aspects: first, that the Lyapunov function $V$ is positive semi-definite; second, that the derivative $\dot{V}$ is negative semi-definite, as demonstrated by Equation (17). In this paper, the initial value of the Lyapunov function $V(0)$ is a finite constant, and $V(t)$ remains bounded by this constant:
$$ V(t) \le V(0) = \Theta, \quad \forall t \ge 0 $$
The Lyapunov function needs to satisfy two threshold constraints for connectivity preservation:
$$ V(t) \le G_1(r_{in}), \qquad V(t) \le G_2(R) $$
Let $V_0 = V(0)$; then:
$$ G_1 = \int_{r_{in}}^{r_{out}} g_1(s)\, ds, \qquad G_2 = \int_{r_{out}}^{R} g_2(s)\, ds $$
The maximum values of the potential functions are $G_{1max} = k G_1$ and $G_{2max} = k G_2$; then:
$$ V_0 \le k G_1, \qquad V_0 \le k G_2 $$
In order to ensure connectivity during the formation-grouping process, it is necessary to set $k$ to a suitable value that satisfies the above inequalities. Let $\Gamma = \{ (x(t), v(t)) \mid V(t) \le V(0),\ \forall t \ge 0 \}$; according to the LaSalle invariance principle, as $t \to \infty$, every solution starting in $\Gamma$ approaches the largest invariant set $W = \{ (x(t), v(t)) \in \Gamma \mid \dot{V}(t) = 0 \}$. From Equation (20), $\dot{V} = 0$ holds if and only if $\hat{v}_i = 0$, $\forall i \in V$, i.e., $v_1 = v_2 = \cdots = v_n = 0$. This indicates that all agents reach the same (zero) velocity, so the velocities of the agents reach a stable consensus.
As $t \to \infty$, $u_i$ can be simplified as follows:
$$ u_i = \dot{v}_i = -\omega_i^2 \sum_{j \in N_i^{q}} \nabla_{x_i} G_1\left(\|x_{ij}\|\right) - \omega_i^2 \sum_{j \in N_i^{l}} \nabla_{x_i} G_2\left(\|x_{ij}\|\right) - \alpha \sum_{j \in N_i} S\left(\varepsilon_i^{\rho}, \varepsilon_j^{\rho}, v_m\right) = 0 $$
thus:
$$ u = -\alpha \hat{L} x = -\alpha \Lambda L \hat{x} = 0_n $$
The control input satisfies $u_i = 0$ only when the agents in the multi-agent system achieve multi-consensus. The above procedure constitutes the stability proof for static networks; the Lyapunov stability proof now needs to be extended to the case where the network topology is dynamic. In this paper, the network topology graph at moment $t$ is denoted $G(t)$, with Laplace matrix $L_{G(t)}$. The intelligence degree $\omega_i$ varies with the dynamic subgroup assignment of the agents. Therefore, the multi-consensus control law is a piecewise continuous function, and the control input jumps when the network topology graph of the target formation changes.
Next, this paper analyzes the stability of the above system over a dynamic network. The concept of dwell time is utilized for this analysis; specifically, the minimum dwell time (MDT) concept from switched systems is employed. Under the MDT concept, if the subsystems before and after switching are asymptotically stable, then the switched system is also asymptotically stable. We assume that the network changes at moment $t_k$ and that the next change occurs at moment $t_{k+1}$; the network topology is fixed from $t_k$ to $t_{k+1}$, and this time interval is sufficiently large. In this paper, the dwell time is defined as $\tau(t)$, which refers to the time interval between the network change moments, and is specified as follows:
$$ \tau(t_k) = t_{k+1} - t_k, \quad k \in \mathbb{N} $$
The minimum dwell time is defined as $\tau_d$, and $\tau_d$ is sufficiently large to guarantee the stability of the dynamic system. All time intervals between network topology switches are larger than the minimum dwell time, i.e., $\tau(t_k) \ge \tau_d > 0$. Let $G_c$ denote the set of all possible network topology graphs of the multi-intelligent-agent system. If the initial topology graph of the system is an element of $G_c$, the transformed topology graph remains in $G_c$ under the action of the control law; thus $G_0 \in G_c$ implies $G(t) \in G_c$, $\forall t \ge 0$.
Since $\tau(t_k) \ge \tau_d$ and $\tau_d$ is large enough, the multi-intelligent-agent system achieves asymptotic stability starting from $G_0$. If the network topology $G(t)$ at each moment yields an asymptotically stable subsystem, the switched system also maintains stability, ensuring that the overall multi-agent system remains stable over dynamic networks. It is worth noting that the multi-consensus grouping control law proposed in this paper relies on a topological network constraint, ensuring that the initial network of the multi-agent system is fully connected. In addition, the grouping control algorithm in this paper allows for network disconnections between the subgroups. Although agents within a subgroup may not all directly communicate with one another, each subgroup can reach its formation grouping based on the intelligence degree $\omega$.
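As a small illustration of the dwell-time argument (not from the paper), the following Python sketch checks whether a given sequence of topology switching instants respects an assumed minimum dwell time.

```python
def satisfies_min_dwell_time(switch_times, tau_d):
    """Return True if every interval tau(t_k) = t_{k+1} - t_k is at least tau_d."""
    intervals = [t2 - t1 for t1, t2 in zip(switch_times, switch_times[1:])]
    return all(tau >= tau_d for tau in intervals), intervals

ok, intervals = satisfies_min_dwell_time([0.0, 2.5, 6.0, 9.5], tau_d=2.0)
print(ok, intervals)   # True [2.5, 3.5, 3.5]
```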

3. Modeling and Analysis

3.1. Problem Analysis in Complex Environments

Obstacle avoidance and formation control are crucial and challenging tasks in complex environments. Such environments typically encompass densely structured areas such as urban high-rise districts and narrow alleyways. The characteristics of these environments require agents to possess advanced capabilities in autonomous perception, decision-making, and control.
Maintaining a stable formation structure in complex environments is a challenge. Agents in a formation must constantly adjust their positions to adapt to environmental dynamics while preserving relative positions within the formation. Agents within the formation must collaborate and effectively distribute tasks to enhance efficiency. This requires the multi-intelligent system to exhibit autonomy, synergy, and robustness, thereby improving overall performance and mission execution of the system through advanced control strategies and algorithms.

3.2. Formation Control Modeling

M agents are assigned leadership roles across the formations. The leader is responsible for planning and dynamically updating the formation’s trajectory to ensure obstacle avoidance. The leader usually needs to be equipped with more powerful sensors and processors to better sense the environment and perform path planning. The remaining agents follow the leader. They can be divided into different formations, with the agents within each formation maintaining safe spacing and relative positions under the leader’s guidance. The followers can adjust their speed and direction according to the leader’s instructions.
This paper examines a heterogeneous multi-agent system comprising $N$ followers and $M$ leaders, indexed $i = 1, 2, \ldots, N$ and $i = N+1, \ldots, N+M$, respectively. The relationship between leaders and followers is represented by an undirected topological graph $G = \{V, E, A\}$, where the dynamical equations of the leaders satisfy the following form:
$$ \dot{x}_i(t) = v_i(t), \quad \dot{v}_i(t) = u_{i0}(t), \quad i \in M $$
where $x_i(t) \in \mathbb{R}^{M}$ and $v_i(t) \in \mathbb{R}^{M}$ represent the position and velocity states of the leader, and $u_{i0}(t) \in \mathbb{R}$ denotes the leader’s control input.
For the second-order heterogeneous multi-agent system, assume that the system consists of $N_1$ first-order agents and $N_2$ second-order agents, satisfying $N_1 + N_2 = N$, where $N_1$ and $N_2$ are positive constants. The kinetic equations of the first-order and second-order followers satisfy the following forms:
$$ \dot{x}_i(t) = u_i(t) + f_i(x_i) + d_i(t), \quad i \in N_1 $$
$$ \dot{x}_i(t) = v_i(t), \quad \dot{v}_i(t) = u_i(t) + d_i(t), \quad i \in N_2 $$
where $x_i(t) \in \mathbb{R}^{N_1}$ is the position information of a first-order agent, $x_i(t) \in \mathbb{R}^{N_2}$ and $v_i(t) \in \mathbb{R}^{N_2}$ represent the position and velocity states of a second-order agent, $u_i(t) \in \mathbb{R}$ is the control input of the first-order or second-order agent, and $d_i(t)$ is the external perturbation.
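The follower dynamics above can be simulated directly; the sketch below (a Python illustration rather than the paper's MATLAB code) integrates one first-order and one second-order follower with forward Euler, using placeholder values for $f_i$, $d_i$, and the control inputs.

```python
import numpy as np

def step_first_order(x, u, d, dt, f=lambda x: np.zeros_like(x)):
    """First-order follower: x_dot = u + f(x) + d."""
    return x + dt * (u + f(x) + d)

def step_second_order(x, v, u, d, dt):
    """Second-order follower: x_dot = v, v_dot = u + d."""
    return x + dt * v, v + dt * (u + d)

dt = 0.01
x1 = np.array([0.0, 0.0])                          # a first-order agent
x2, v2 = np.array([1.0, 0.0]), np.zeros(2)         # a second-order agent
for _ in range(500):
    x1 = step_first_order(x1, u=np.array([0.1, 0.0]), d=np.zeros(2), dt=dt)
    x2, v2 = step_second_order(x2, v2, u=np.array([0.0, 0.1]), d=np.zeros(2), dt=dt)
print(np.round(x1, 3), np.round(x2, 3), np.round(v2, 3))
```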
Definition 3
([19]). A heterogeneous multi-agent system defined by Equations (25)–(27) achieves preset-time containment control consistency if it satisfies Equation (28):
$$ \begin{aligned} &\lim_{t \to t_f} \left\| x_i(t) - \sum_{j=N+1}^{N+M} h_{ij} x_j \right\| = 0, \qquad x_i(t) - \sum_{j=N+1}^{N+M} h_{ij} x_j = 0, \quad t > t_f, \\ &\lim_{t \to t_f} \left\| v_i(t) - \sum_{j=N+1}^{N+M} h_{ij} v_j \right\| = 0, \qquad v_i(t) - \sum_{j=N+1}^{N+M} h_{ij} v_j = 0, \quad t > t_f, \qquad i = 1, 2, \ldots, N. \end{aligned} $$
where $t_f$ is the preset time for the multi-intelligent-agent system to reach containment consistency. The convergence time $t_f$ can be specified arbitrarily and is not affected by the initial values of the system or the control parameters; when $t > t_f$, each follower has converged.

3.3. Heterogeneous Multi-Intelligent-Agent Formation Collaborative Obstacle Avoidance Controller

3.3.1. Controller Design

In the controller design of this paper, the control input $u_i$ consists of the base control input $u_i^j$, the obstacle avoidance control input $u_i^v$, and the velocity inhibition term $V_C$. The base control input $u_i^j$ is used to adjust the position of the agent, bringing it closer to the target position. Setting the target position as $x_i^d$, we can define the base control input as follows:
$$ u_i^j = -K_p \left( x_i(t) - x_i^d(t) \right) - K_v v_i(t) $$
where $K_p$ and $K_v$ are the gain matrices for position and velocity, respectively. According to the synergistic goal requirement, it can be rewritten as follows:
$$ u_i^j = -\alpha \left( X_C(:, i) - \frac{W(i)}{W(j)} X_C(:, j) \right) $$
where $X_C$ is the difference between the current position and the target position.
The obstacle avoidance control input $u_i^v$ is used to prevent collisions between agents. Letting the distance between agents be $d_{ij} = \|x_i - x_j\|$, we can design the obstacle avoidance control input as follows:
$$ u_i^v = \begin{cases} k_i^v \left( \dfrac{1}{d_{ij}} - \dfrac{1}{d_{min}} \right) \dfrac{x_i - x_j}{d_{ij}}, & d_{ij} < d_{min} \\ 0, & d_{ij} \ge d_{min} \end{cases} $$
where $d_{min}$ is the minimum distance for obstacle avoidance, and $k_i^v$ is the obstacle avoidance gain. According to the obstacle avoidance objective, it can be rewritten as follows:
$$ u_i^v = \begin{cases} \left( 0.5\left(1 + \cos\!\left(\pi \dfrac{d_{ij} - s}{s - s_i}\right)\right) + 1 \right) \dfrac{X_C(:, i) - X_C(:, j)}{\left\| X_C(:, i) - X_C(:, j) \right\|}, & d_{ij} \in [s_i, s] \\ \left( 0.5\left(1 + \cos\!\left(\pi \dfrac{R - d_{ij}}{R - R_p}\right)\right) + 1 \right) \dfrac{X_C(:, i) - X_C(:, j)}{\left\| X_C(:, i) - X_C(:, j) \right\|}, & d_{ij} \in [R_p, R] \end{cases} $$
where s and s i are thresholds for obstacle avoidance distances, and R p and R are other distance thresholds.
In order to limit the speed of the agents, it is necessary to introduce a speed inhibition term V C :
V C = K v v i
where K v is the speed inhibition gain.
This results in the following control input expression:
$$ u_i = u_i^j - u_i^v - \beta V_C(:, i) $$
where $u_i$ is the control input, $u_i^j$ is the base control input, $u_i^v$ is the obstacle avoidance control input, and $V_C(:, i)$ is the velocity inhibition term. The parameter $\beta$ is a constant used to control the magnitude of the velocity term’s influence in the control law. Substitution yields the following:
$$ u_i = \begin{cases} -\alpha \left( X_C(:, i) - \dfrac{W(i)}{W(j)} X_C(:, j) \right) - \left( 0.5\left(1 + \cos\!\left(\pi \dfrac{d_{ij} - s}{s - s_i}\right)\right) + 1 \right) \dfrac{X_C(:, i) - X_C(:, j)}{\left\| X_C(:, i) - X_C(:, j) \right\|} - K_v v_i, & d_{ij} \in [s_i, s] \\ -\alpha \left( X_C(:, i) - \dfrac{W(i)}{W(j)} X_C(:, j) \right) - \left( 0.5\left(1 + \cos\!\left(\pi \dfrac{R - d_{ij}}{R - R_p}\right)\right) + 1 \right) \dfrac{X_C(:, i) - X_C(:, j)}{\left\| X_C(:, i) - X_C(:, j) \right\|} - K_v v_i, & d_{ij} \in [R_p, R] \end{cases} $$
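A hedged Python sketch of this composite controller follows; the sign conventions mirror the reconstruction above, and the numerical values of alpha, K_v, and the distance thresholds s_i, s, R_p, R are assumed for illustration only.

```python
import numpy as np

alpha, K_v = 1.5, 0.8
s_i, s, R_p, R = 2.0, 5.0, 30.0, 50.0   # assumed avoidance thresholds

def base_term(Xc_i, Xc_j, w_i, w_j):
    """Base (consensus-toward-target) input built from the weighted position errors."""
    return -alpha * (Xc_i - (w_i / w_j) * Xc_j)

def avoidance_term(Xc_i, Xc_j, d_ij):
    """Cosine-shaped avoidance input, active only inside the two distance bands."""
    direction = (Xc_i - Xc_j) / (np.linalg.norm(Xc_i - Xc_j) + 1e-9)
    if s_i <= d_ij <= s:
        gain = 0.5 * (1 + np.cos(np.pi * (d_ij - s) / (s - s_i))) + 1.0
    elif R_p <= d_ij <= R:
        gain = 0.5 * (1 + np.cos(np.pi * (R - d_ij) / (R - R_p))) + 1.0
    else:
        return np.zeros_like(Xc_i)
    return -gain * direction

def control(Xc_i, Xc_j, w_i, w_j, d_ij, v_i):
    """u_i = base term + avoidance term - K_v * v_i (velocity inhibition)."""
    return base_term(Xc_i, Xc_j, w_i, w_j) + avoidance_term(Xc_i, Xc_j, d_ij) - K_v * v_i

u = control(np.array([1.0, 0.5]), np.array([0.2, 0.1]), 1.0, 1.1, d_ij=3.0,
            v_i=np.array([0.1, 0.0]))
print(np.round(u, 3))
```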
Next, a group consistency protocol based on time-varying formation is proposed:
$$ u_i = \begin{cases} K_1 \left( \theta_i - h_i \right) + K_2 \left[ \displaystyle\sum_{v_j \in N_{1i}} a_{ij} \left( \left( \theta_j - h_j \right) - \left( \theta_i - h_i \right) \right) + \displaystyle\sum_{v_j \in N_{2i}} a_{ij} \left( \theta_j - h_j \right) \right] + \dot{h}_i^v, & i \in \{1, \ldots, N\} \\ K_1 \left( \theta_i - h_i \right) + K_2 \left[ \displaystyle\sum_{v_j \in N_{1i}} a_{ij} \left( \theta_j - h_j \right) + \displaystyle\sum_{v_j \in N_{2i}} a_{ij} \left( \left( \theta_j - h_j \right) - \left( \theta_i - h_i \right) \right) \right] + \dot{h}_i^v, & i \in \{N+1, \ldots, N+M\} \end{cases} $$
The gain matrices satisfy $K = K_1 = K_2 = [k_1, k_2]$, which are the controller parameters to be designed. Assume that the system consists of $N + M$ agents, where the state of each agent is denoted by $\theta_i(t)$, with $i = 1, 2, \ldots, N + M$.
  • Let $\theta(t) = \left[ \theta_1^T(t), \theta_2^T(t), \ldots, \theta_N^T(t), \theta_{N+1}^T(t), \ldots, \theta_{N+M}^T(t) \right]^T$ be the stacked state vector of the agents, and let $h(t) = \left[ h_1^T(t), h_2^T(t), \ldots, h_N^T(t), h_{N+1}^T(t), \ldots, h_{N+M}^T(t) \right]^T$ be the stacked formation function, where $h_{\sigma_i}(t) = h_i(t) = \left[ h_i^x(t), h_i^v(t) \right]^T$. The position component of $h(t)$ is $h^x(t) = \left[ h_1^x(t), h_2^x(t), \ldots, h_{N+M}^x(t) \right]^T$, and the velocity component is $h^v(t) = \left[ h_1^v(t), h_2^v(t), \ldots, h_{N+M}^v(t) \right]^T$. The goal of the system is to make each agent’s state $\theta_i(t)$ converge to its formation trajectory $h_i(t)$, i.e., $\lim_{t \to \infty} \left\| \theta_i(t) - h_i(t) \right\| = 0$.
The system is then rewritten according to the protocol as follows:
$$ \dot{\theta}(t) = \left[ I_{N+M} \otimes \left( B_2 K + B_1 B_2^T \right) - L \otimes \left( B_2 K \right) \right] \theta(t) - \left[ I_{N+M} \otimes \left( B_2 K \right) - L \otimes \left( B_2 K \right) \right] h(t) + \left( I_{N+M} \otimes B_2 \right) \dot{h}^v(t) $$
where B 1 and B 2 are system matrices, L is the Laplacian matrix, and K is the control gain matrix.

3.3.2. Proof of Stability

The adjacency matrix A describes the connectivity between agents, determined by the Euclidean distance between agents:
$$ A(i, j) = \begin{cases} 1, & d_{ij} < t_R \\ 0.5 \left( 1 + \cos\!\left( \pi \dfrac{d_{ij} - t_R}{R - t_R} \right) \right), & t_R < d_{ij} < R \\ 0, & d_{ij} > R \end{cases} $$
where $d_{ij}$ represents the distance between agents $i$ and $j$, and $t_R$ and $R$ are distance thresholds.
  • Definition of degree matrix D :
$$ D(i, i) = \sum_{j=1}^{k} A(i, j) $$
Definition of Laplace matrix L c :
$$ L_c = D - A $$
Normalize the Laplace matrix L c :
$$ L(i, j) = \frac{W(i)}{W(j)} L_c(i, j) $$
where $W(i)$ is the weight associated with the $i$-th agent.
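The following Python sketch assembles these matrices for a toy configuration; the thresholds t_R, R and the weights W are illustrative values, and the ratio-based normalization follows the reconstruction above.

```python
import numpy as np

def adjacency(positions, t_R=20.0, R=50.0):
    """Distance-based adjacency A(i, j) with the cosine taper between t_R and R."""
    n = len(positions)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(positions[i] - positions[j])
            if d < t_R:
                A[i, j] = 1.0
            elif d < R:
                A[i, j] = 0.5 * (1 + np.cos(np.pi * (d - t_R) / (R - t_R)))
    return A

def normalized_laplacian(positions, W, t_R=20.0, R=50.0):
    """Degree matrix D, Laplace matrix L_c = D - A, and weight-normalized L."""
    A = adjacency(positions, t_R, R)
    D = np.diag(A.sum(axis=1))
    Lc = D - A
    ratio = np.outer(W, 1.0 / np.asarray(W, dtype=float))  # ratio[i, j] = W(i) / W(j)
    return ratio * Lc

pos = np.array([[0.0, 0.0], [15.0, 0.0], [40.0, 0.0]])
print(np.round(normalized_laplacian(pos, W=[1.0, 1.1, 1.2]), 3))
```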
  • The Lyapunov function is chosen as a quadratic function of the agent states:
$$ V(x) = \frac{1}{2} \sum_{i=1}^{M+N} \left( \left\| x_i - x_i^d \right\|^2 + \left\| v_i \right\|^2 \right) $$
Calculate the time derivative of the Lyapunov function V ( x ) :
$$ \dot{V}(x) = \sum_{i=1}^{M+N} \left( \left( x_i - x_i^d \right)^T \dot{x}_i + v_i^T \dot{v}_i \right) $$
Since $\dot{x}_i = v_i$ and $\dot{v}_i = u_i$, the following holds:
$$ \dot{V}(x) = \sum_{i=1}^{M+N} \left( \left( x_i - x_i^d \right)^T v_i + v_i^T u_i \right) $$
Substituting the control input yields the following:
$$ \dot{V}(x) = \sum_{i=1}^{M+N} \left[ \left( x_i - x_i^d \right)^T v_i + v_i^T \left( -\alpha \left( X_C(:, i) - \frac{W(i)}{W(j)} X_C(:, j) \right) - \left( 0.5\left(1 + \cos\!\left(\pi \frac{d_{ij} - s}{s - s_i}\right)\right) + 1 \right) \frac{X_C(:, i) - X_C(:, j)}{\left\| X_C(:, i) - X_C(:, j) \right\|} - K_v v_i \right) \right], \quad d_{ij} \in [s_i, s] $$
$$ \dot{V}(x) = \sum_{i=1}^{M+N} \left[ \left( x_i - x_i^d \right)^T v_i + v_i^T \left( -\alpha \left( X_C(:, i) - \frac{W(i)}{W(j)} X_C(:, j) \right) - \left( 0.5\left(1 + \cos\!\left(\pi \frac{R - d_{ij}}{R - R_p}\right)\right) + 1 \right) \frac{X_C(:, i) - X_C(:, j)}{\left\| X_C(:, i) - X_C(:, j) \right\|} - K_v v_i \right) \right], \quad d_{ij} \in [R_p, R] $$
Simplify to obtain the following:
$$ \dot{V}(x) = \sum_{i=1}^{M+N} \left[ \left( x_i - x_i^d \right)^T v_i - K_p \left( x_i - x_i^d \right)^T v_i - K_v \left\| v_i \right\|^2 \right] = \sum_{i=1}^{M+N} \left[ -\left( K_p - 1 \right) \left( x_i - x_i^d \right)^T v_i - K_v \left\| v_i \right\|^2 \right] $$
Since K p and K v are positive, each term of V ˙ x is nonpositive:
$$ \dot{V}(x) = -K_v \sum_{i=1}^{M+N} \left\| v_i \right\|^2 $$
Since the sum of the squared velocities is non-negative and $K_v > 0$, $\dot{V}(x) \le 0$; the velocity term dissipates the system’s kinetic energy, and the system is therefore asymptotically stable.

4. Experiment and Simulation

4.1. Simulation Verification of System Consistency

As a first step, the consistency of the constructed multi-intelligent-agent system is verified through simulation. In this setup, agents 1–6 are modeled as first-order systems, while agents 7–10 are second-order systems. This structure is designed to simulate the interaction and coordination of multiple agents operating in complex environments.
For the model construction, the kinematic and dynamic models of each agent and their interconnection mechanism are defined. The state of each first-order agent is represented by its position and velocity, while each second-order agent additionally introduces acceleration as a state variable. The leader agent sets the target position and transmits positional information to the followers, who adjust their own motion states by receiving and processing this information to achieve consensus.
In the experiments, MATLAB R2022b is used to construct the simulation environment for the proposed multi-agent system. To verify the behavior of the system in a dynamic environment, a series of experimental scenarios is designed, encompassing both fixed-target and dynamic-target cases. In each experiment, the system records the position, velocity, acceleration, and other state parameters of each agent and observes the convergence of the system over the iterations.
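For readers who want a quick reproduction of the qualitative behavior, the following is a minimal Python analogue of the consistency test (not the authors' MATLAB setup): six first-order and four second-order followers are driven toward a static leader target with simple assumed gains.

```python
import numpy as np

rng = np.random.default_rng(0)
leader_pos = np.array([10.0, 10.0])
x1 = rng.uniform(0, 5, size=(6, 2))                        # first-order agents
x2, v2 = rng.uniform(0, 5, size=(4, 2)), np.zeros((4, 2))  # second-order agents
kp, kv, dt = 1.0, 1.5, 0.01                                # assumed gains and step

for _ in range(2000):
    x1 += dt * (kp * (leader_pos - x1))                    # first-order: x_dot = u
    u2 = kp * (leader_pos - x2) - kv * v2                  # second-order: v_dot = u
    v2 += dt * u2
    x2 += dt * v2

print(np.round(x1.mean(axis=0), 2), np.round(x2.mean(axis=0), 2))  # both approach [10, 10]
```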
As shown in Figures 1–5, the heterogeneous multi-agent system achieves a steady state at the predefined time point t = 5 s, and each agent accurately follows the leader’s state information. The experimental results show that the follower agents progressively converge toward the leader’s target position, and the system demonstrates effective collaboration and consistency.
Through the above simulation verification, it can be concluded that the constructed multi-agent system achieves consistency under the specified conditions, providing a robust theoretical and experimental foundation for subsequent research.

4.2. Formation and Obstacle Avoidance Performance Simulation Verification

The design and validation of formation control algorithms usually rely on simulations to evaluate algorithm performance across varying conditions, ensuring that agents can achieve the desired formation shapes as expected. Different formation shapes offer different advantages in performing specific tasks. For example, a grid or line formation covers larger areas, making it ideal for search-and-rescue operations, whereas a ring or wedge formation is well suited for surveillance and reconnaissance, offering enhanced situational awareness and protection. Figure 6 illustrates the formation shapes of the multi-agent system in the simulation. The experiment validates the formation control algorithm by simulating dynamic formation process. In our simulations, we assume ideal conditions where the reference trajectories are smooth, and the disturbances acting on the system are bounded. To simulate the effect of external disturbances, we introduce small random position perturbations at the initial setup, representing factors like wind. However, the range of these disturbances is limited, and we assume that the disturbances are small enough for the system to adapt and ultimately form the desired formation. These assumptions provide a clear understanding of how the system behaves under ideal conditions. Table 2 provides the relevant parameters used in the simulation experiments.
To validate the formation control and obstacle avoidance performance of a heterogeneous multi-agent system in complex environments, the simulation involves ten agents, comprising six first-order and four second-order agents. Obstacles are set up in the simulation environment to replicate challenges found in complex urban environments and evaluate the agents’ path-planning and formation-keeping capabilities.
The simulation results in Figure 7 illustrate the evolution of the formation shape over time. It can be observed that the agents initially form a circular formation and subsequently adjust to straight-line and triangular formations according to the operational requirements for navigating gaps and avoiding obstacles. In this study, the first-order agents use a simplified dynamics model; they form a stable triangular formation and move along the edges of obstacles when encountering them, thus keeping the formation stable. The second-order agents have a more complex dynamics model, which allows finer motion control. When encountering narrow gaps between tall buildings, the second-order agents can precisely follow straight paths through the gaps between obstacles, thereby preserving formation stability. Figure 8 shows the real-time consensus dynamics of the first-order and second-order agents. These processes show that the formation shape adjustments and control strategies are effectively implemented in a dynamic environment.
To further validate the effectiveness of the proposed formation control and obstacle avoidance strategies, this simulation introduces dynamically changing obstacle scenarios and expands the number of agents to 25. These agents are divided into four groups (Group 1 to Group 4), where each group maintains formation stability while avoiding obstacles. The simulation results in Figure 9 demonstrate that the multi-agent system can effectively avoid obstacles and maintain formation stability in dynamic environments. The system exhibits excellent path-planning capabilities and coordination in dynamic conditions. Each group collaborates to construct the initial formation and successfully avoids obstacles in dynamic environments while maintaining intra-group consistency. The dynamic obstacles in the simulation include moving geometries such as triangular, rectangular, and elongated shapes, which represent different types of obstacles commonly encountered in complex environments. Specifically, moving triangular obstacles simulate sharp-edged objects such as traffic signs, building roof peaks, or rocky outcrops in mountainous regions. Rectangular obstacles represent objects like building facades, billboards, or large transport vehicles that are commonly found in urban settings. Lastly, elongated obstacles are used to model long structures such as bridges, power lines, or cargo transportation vehicles that may be encountered during flight in urban or mountainous areas. The simulation results show that the heterogeneous multi-agent system can effectively achieve formation control in complex environments, and first-order agents and second-order agents each show their advantages in different tasks. Specifically, first-order agents excel at maintaining formation stability and avoiding obstacles, while the second-order agents excel in path traversal and precise control. These results indicate that the formation control strategy proposed in this study can not only effectively address challenges in complex environments but also contribute significantly to practical applications, providing substantial support for deploying multi-agent systems in real-world complex environments.

5. Conclusions

The research presented in this paper demonstrates significant advancements in formation control and obstacle avoidance for UAV-based multi-agent systems. These results enhance the agility and adaptability of multi-agent systems in highly dynamic and complex environments while improving the system’s robustness and reliability. These findings are of great significance in promoting the application of UAVs in fields such as rescue, inspection, logistics, and more. Particularly in urban environments, the complex terrain and high building density demand greater collaboration capabilities and autonomous decision-making from multi-agent systems. This research addresses obstacle avoidance challenges while ensuring formation stability, offering a robust solution for UAVs operating in urban environments.
In addition, this research introduces novel concepts and methods for designing control strategies for heterogeneous multi-agent systems. The constituent agents of a heterogeneous multi-agent system exhibit diverse capabilities and performance, making their effective coordination critical for achieving optimal system performance. The formation control strategy proposed in this study optimizes system performance by analyzing the characteristics and capabilities of different agents and assigning tasks accordingly. This strategy can be applied to a broader spectrum of heterogeneous multi-agent systems, offering novel insights and methodologies for advancing research and development in related domains.

Author Contributions

Conceptualization, X.C.; methodology, J.J.; validation, Y.Y.; formal analysis, Z.Z.; project administration, H.C. and W.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 62303380).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cordeiro, T.F.K.; Ishihara, J.Y.; Ferreira, H.C. A Decentralized Low-Chattering Sliding Mode Formation Flight Controller for a Swarm of UAVs. Sensors 2020, 20, 3094.
  2. Cappello, D.; Mylvaganam, T. Distributed Control of Multi-Agent Systems via Linear Quadratic Differential Games with Partial Information. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami, FL, USA, 17–19 December 2018; pp. 4565–4570.
  3. Tan, C.; Cui, Y.; Li, Y. Global Consensus of High-Order Discrete-Time Multi-Agent Systems with Communication Delay and Saturation Constraint. Sensors 2022, 22, 1007.
  4. Wang, M.; Li, W. Distributed adaptive control for nonlinear multi-agent systems with nonlinear parametric uncertainties. Math. Biosci. Eng. 2023, 20, 12908–12922.
  5. Chen, L.; Li, C.; Guo, Y.; Ma, G.; Li, Y.; Xiao, B. Formation–containment control of multi-agent systems with communication delays. ISA Trans. 2022, 128, 32–43.
  6. Zhang, Y.; Li, X.; Wang, L. Distributed H∞ consensus of heterogeneous multi-agent systems with nonconvex constraints. ISA Trans. 2022, 131, 160–166.
  7. Li, Z.; Chen, M.; Ding, Z. Distributed adaptive controllers for cooperative output regulation of heterogeneous linear multi-agent systems with directed graphs. Automatica 2016, 68, 179–183.
  8. Lu, M.; Wu, J.; Zhan, X.; Han, T.; Yan, H. Consensus of second-order heterogeneous multi-agent systems with and without input saturation. ISA Trans. 2022, 126, 14–20.
  9. Park, B.S.; Yoo, S.J. An Error Transformation Approach for Connectivity-Preserving and Collision-Avoiding Formation Tracking of Networked Uncertain Underactuated Surface Vessels. IEEE Trans. Cybern. 2018, 49, 2955–2966.
  10. Xiong, S.; Wu, Q.; Wang, Y.; Chen, M. An l2–l∞ distributed containment coordination tracking of heterogeneous multi-unmanned systems with switching directed topology. Appl. Math. Comput. 2021, 404, 126080.
  11. Hua, Y.; Dong, X.; Hu, G.; Li, Q.; Ren, Z. Distributed Time-Varying Output Formation Tracking for Heterogeneous Linear Multiagent Systems With a Nonautonomous Leader of Unknown Input. IEEE Trans. Autom. Control 2019, 64, 4292–4299.
  12. Duan, J.; Zhang, H.; Liang, Y.; Cai, Y. Bipartite finite-time output consensus of heterogeneous multi-agent systems by finite-time event-triggered observer. Neurocomputing 2019, 365, 86–93.
  13. Duan, J.; Duan, G.; Cheng, S.; Cao, S.; Wang, G. Fixed-time time-varying output formation–containment control of heterogeneous general multi-agent systems. ISA Trans. 2023, 137, 210–221.
  14. Hua, Y.; Dong, X.; Li, Q.; Ren, Z. Distributed fault-tolerant time-varying formation control for high-order linear multi-agent systems with actuator failures. ISA Trans. 2017, 71, 40–50.
  15. Lan, J.; Liu, Y.-J.; Yu, D.; Wen, G.; Tong, S.; Liu, L. Time-Varying Optimal Formation Control for Second-Order Multiagent Systems Based on Neural Network Observer and Reinforcement Learning. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 3144–3155.
  16. Mo, L.; Guo, S.; Yu, Y. Mean-square H∞ antagonistic formations of second-order multi-agent systems with multiplicative noises and external disturbances. ISA Trans. 2020, 97, 36–43.
  17. Zhao, H.; Park, J.H. Group consensus of discrete-time multi-agent systems with fixed and stochastic switching topologies. Nonlinear Dyn. 2014, 77, 1297–1307.
  18. Gong, J.; Jiang, B.; Ma, Y.; Mao, Z. Distributed Adaptive Fault-Tolerant Formation–Containment Control With Prescribed Performance for Heterogeneous Multiagent Systems. IEEE Trans. Cybern. 2022, 53, 7787–7799.
  19. Choi, J.; Song, Y.; Lim, S.; Oh, H. Decentralized Multiple V-Formation Control in Undirected Time-Varying Network Topologies. In Proceedings of the 2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS), Cranfield, UK, 25–27 November 2019; pp. 278–286.
Figure 1. First-order agent consensus state trajectory.
Figure 2. Velocity of first-order agents over time.
Figure 3. Position of second-order agents over time.
Figure 4. Velocity versus time for second-order agents.
Figure 5. Acceleration versus time for second-order agents.
Figure 6. Simulated multi-intelligent-agent formation.
Figure 7. Simulation results of formation grouping control for the multi-intelligent-agent system.
Figure 8. Real-time change curves of first-order and second-order multi-consensus results.
Figure 9. Simulation results of formation control for the 25-agent multi-intelligent-agent system.
Table 1. Limitations of existing methods.
Method | Centralized Control | Distributed Control | Proposed Method
Scalability | not suitable for large-scale systems | better than centralized but still limited | scalable to large multi-agent systems
Fault Tolerance | single point of failure | resilient to some faults | robust against faults and jamming
Flexibility | requires centralized coordination | agents work independently but with fixed rules | dynamic subgrouping and real-time adaptability
System Stability | precise control | difficult to guarantee under changing conditions | ensures stability with time-varying topologies
Complexity of Environment | struggles with dynamic and complex environments | can handle complex environments but with limited real-time response | effective in complex and dynamic environments (urban flight, mountainous terrain)
Table 2. Parameters for numerical simulations.
Parameter | Value
Communication boundary, R | 50 m
Formation-limit boundary, R | 40 m
Exclusion outer boundary, r_out | 5 m
Exclusion inner boundary, r_in | 0.5 m
Potential function gains, (k_1, k_2) | (8, 2)
Control parameters, (α, β) | (1.5, 1.5)
Intelligence degree, ω | (1, 1.1, 1.2)
Communication attenuation rate, τ | 0.6
Connectivity lower bound, ϵ | 0.4

