Article

Fuzzy Flocking Control for Multi-Agents Trapped in Dynamic Equilibrium Under Multiple Obstacles

by Weibin Liang 1,2, Xiyan Sun 1,2,*, Yuanfa Ji 1, Xinyi Liu 1, Jianhui Wu 1,3 and Zhongxi He 1
1 School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
2 International Joint Laboratory for Spatiotemporal Information and Intelligent Location Services, Guilin 541004, China
3 School of Information Science and Technology, Hunan Institute of Science and Technology, Yueyang 414006, China
* Author to whom correspondence should be addressed.
Machines 2025, 13(2), 119; https://doi.org/10.3390/machines13020119
Submission received: 12 December 2024 / Revised: 23 January 2025 / Accepted: 2 February 2025 / Published: 4 February 2025
(This article belongs to the Section Automation and Control Systems)

Abstract:
The Olfati-Saber flocking (OSF) algorithm is widely used in multi-agent flocking control due to its simplicity and effectiveness. However, this algorithm is prone to trapping multi-agents in dynamic equilibrium under multiple obstacles, and dynamic equilibrium is a key technical issue that needs to be addressed in multi-agent flocking control. To overcome this problem, we propose a dynamic equilibrium judgment rule and design a fuzzy flocking control (FFC) algorithm. In this algorithm, the expected velocity is divided into a fuzzy expected velocity and a projected expected velocity. The fuzzy expected velocity is designed to make the agent escape from the dynamic equilibrium, and the projected expected velocity is designed to steer the agent around the obstacles. Meanwhile, the sensing radius of the agent is divided into four subregions, and a nonnegative subsection function is designed to adjust the attractive/repulsive potentials in these subregions. In addition, a virtual leader is designed to guide the agents in achieving group goal following. Finally, the experimental results show that, with the proposed algorithm, multi-agents can escape from dynamic equilibrium and bypass obstacles at a faster velocity, and the minimum distance between them consistently remains greater than the minimum safe distance in complex environments.

1. Introduction

Flocking is a cluster composed of a large number of agents, enabling the system to achieve overall coordination through the behaviors of individual agents [1,2,3]. To date, there have been many studies on the control of coordination behaviors generated by agents, including flocking control [4,5], consensus control [6,7], and formation control [8,9], to name a few.
Flocking control is the basis for investigating consensus control and formation control. Olfati-Saber [10] proposed a flocking control algorithm. In this algorithm, collisions between agents and between the agent and obstacles are avoided by virtual attractive/repulsive potential, velocity matching between agents is achieved by the velocity consensus term, and goal following is realized by virtual leaders. Meanwhile, flocking control has been extensively investigated by numerous researchers [11,12,13]. From the perspective of virtual leaders, current research on flocking control can be divided into each agent with virtual leader information [14,15,16], some agents with virtual leader information [17,18], and multi-agents with multiple virtual leaders’ information [19]. Overall, the aforementioned studies have contributed to the advancement of multi-agent flocking control based on virtual leaders. However, it is noted in [20,21,22] that some agents fail to follow the virtual leader in multiple obstacle environments. The reason is that the virtual forces of these agents have reached equilibrium with near-zero acceleration and velocity, which is referred to as dynamic equilibrium. Therefore, the virtual force problem of multi-agents in multiple obstacle environments is critical in flocking control research [23].
At present, the methods for solving the virtual force problem mainly include geometric filling [24,25] and intelligent decision-making [26,27,28]. In the geometric filling method, it is generally assumed that the agent can perceive global obstacle information. Furthermore, to change the virtual force of the agent under multiple obstacles such that it can follow the goal, non-convex obstacles are transformed into convex ones via geometric filling rules, followed by the generation of virtual agents on their surfaces. In the intelligent decision-making method, the agent is first trained in multiple obstacle environments to form obstacle avoidance movement rules. Then, the obstacle information perceived by the agent is matched with the training model, producing the virtual force for obstacle avoidance and goal following. On the whole, both the geometric filling and intelligent decision-making methods enable multi-agents to escape from dynamic equilibrium. However, the geometric filling method requires global obstacle information, which limits its practical application, while the intelligent decision-making method involves a complex training structure, and its real-time performance requires further improvement. In addition, both methods suffer from the multi-attribute decision-making problem.
Fuzzy control is an effective method for solving the multi-attribute decision-making problem [29,30]. In [31], the attractive/repulsive function is designed by the fuzzy control method, yielding a multi-agent flocking control algorithm with obstacle avoidance. The computational complexity of this algorithm is low. However, the agent may still be trapped in dynamic equilibrium when perceiving multiple obstacles. Motivated by the above studies, this paper calculates the fuzzy expected velocity from established membership functions and fuzzy rules, which enables the agent to escape from the dynamic equilibrium. The contributions of this paper are summarized as follows:
(1)
A dynamic equilibrium judgment rule is proposed to determine whether the agent is trapped in dynamic equilibrium. If the agent is trapped in dynamic equilibrium, the fuzzy expected velocity is calculated by the established membership functions and fuzzy rules. In contrast, if the agent is not trapped, the projected expected velocity is obtained by the obstacle projection method.
(2)
The sensing radius region of the agent is divided into four subregions and a nonnegative subsection function is designed to adjust the attractive/repulsive potentials in these subregions.
(3)
A fuzzy flocking control (FFC) algorithm is developed for multi-agents trapped in dynamic equilibrium under multiple obstacles. In the algorithm, the interaction term between agents is reconstructed using the fuzzy expected velocity, projected expected velocity, and nonnegative subsection function.
The remainder of this paper is organized as follows. Some preliminary knowledge is introduced in Section 2. In Section 3, the FFC algorithm is proposed. In Section 4, the properties analysis of the FFC algorithm is presented. Numerical experiments and the discussion are reported in Section 5. The conclusions are given in Section 6.

2. Preliminaries

2.1. Agent-Based Representation

Consider a group of $N$ $\alpha$-agents moving in the $n$-dimensional Euclidean space, $n = 2, 3$. Denote $q_i, p_i, u_i \in \mathbb{R}^{n \times 1}$ as the position, velocity, and control input (or acceleration) acting on $\alpha$-agent $i$, $i = 1, 2, \ldots, N$, at the current time. Then, the second-order dynamic model of $\alpha$-agent $i$ can be formulated as
$$\dot{q}_i = \delta p_i + (1 - \delta)\, p_{i,m} \frac{p_i}{\|p_i\|}, \qquad \dot{p}_i = u_i \tag{1}$$
where $p_{i,m}$ represents the upper velocity limit of $\alpha$-agent $i$, $\delta = 1$ if $\|p_i\| \le p_{i,m}$ and $\delta = 0$ otherwise, $\dot{q}_i$ denotes the first-order derivative of the vector $q_i$, and $\|q_i\|$ represents the Euclidean norm of the vector $q_i$. Furthermore, a virtual $\gamma$-agent is introduced to achieve group goal following (e.g., in multi-drone precision firefighting and multi-robot rescue). Denote $q_\gamma, p_\gamma, u_\gamma \in \mathbb{R}^{n \times 1}$ as the position, velocity, and control input acting on the $\gamma$-agent at the current time. Then, the second-order dynamic model of the $\gamma$-agent can be described as
$$\dot{q}_\gamma = p_\gamma, \qquad \dot{p}_\gamma = u_\gamma \tag{2}$$
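As an illustration of how model (1) can be stepped in discrete time, the following is a minimal Euler-integration sketch; the function name, arguments, and step size are our own choices, not part of the paper:

```python
import math

def step_alpha_agent(q_i, p_i, u_i, p_max, dt):
    """One Euler step of model (1): a double integrator whose position
    update saturates the speed at the limit p_max (the delta switch)."""
    speed = math.hypot(*p_i)
    if speed <= p_max:                       # delta = 1: integrate p_i directly
        q_next = [q + dt * p for q, p in zip(q_i, p_i)]
    else:                                    # delta = 0: clip speed to p_max
        q_next = [q + dt * p_max * p / speed for q, p in zip(q_i, p_i)]
    p_next = [p + dt * u for p, u in zip(p_i, u_i)]   # p_dot = u_i
    return q_next, p_next
```

The same update with the saturation branch removed integrates the $\gamma$-agent model (2).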

2.2. Proximity Net of $\alpha$-Agents and $\beta$-Agents

The flocking composed of $N$ $\alpha$-agents can be described by an undirected graph $G_\alpha(q) = (V_\alpha, E_\alpha(q))$, where $V_\alpha = \{1, 2, \ldots, N\}$ is the set of $\alpha$-agent nodes, $E_\alpha(q) = \{(i, j) \mid i, j \in V_\alpha, j \ne i\}$ is the set of edges, and $q$ is the configuration of all $\alpha$-agents. To avoid collisions between $\alpha$-agents and obstacles, a number of virtual $\beta$-agents are generated at the boundary points of obstacles. All $\beta$-agents can be described by a directed bipartite graph $G_\beta(q) = (V_\beta, E_\beta(q))$, where $V_\beta = \{v_1, v_2, \ldots, v_l\}$ is the set of $\beta$-agent nodes and $E_\beta(q) = \{(i, k) \mid i \in V_\alpha, k \in V_\beta\}$ is the set of edges. Then, the proximity net of $\alpha$-agents and $\beta$-agents can be described by
$$G(q) = G_\alpha(q) + G_\beta(q) \tag{3}$$
Let $r$ be the sensing radius, $N_i^\alpha$ be the set of neighboring $\alpha$-agents of $\alpha$-agent $i$, $N_i^\beta$ be the set of neighboring $\beta$-agents of $\alpha$-agent $i$, and $\breve{q}_{i,k}$ be the position of $\beta$-agent $k$; then
$$N_i^\alpha = \{ j \in V_\alpha \mid \|q_j - q_i\| < r,\ i \in V_\alpha,\ j \ne i \} \tag{4}$$
$$N_i^\beta = \{ k \in V_\beta \mid \|\breve{q}_{i,k} - q_i\| < r,\ i \in V_\alpha \} \tag{5}$$
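The neighbor sets (4) and (5) amount to simple range tests. A minimal sketch (our own helper, assuming plain Euclidean distances and list-of-coordinates inputs):

```python
def neighbor_sets(i, q_alpha, q_beta_i, r):
    """Neighbor sets (4) and (5) for alpha-agent i. q_alpha: positions of
    all alpha-agents; q_beta_i: positions of the beta-agents generated for
    agent i; r: sensing radius."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n_alpha = [j for j in range(len(q_alpha))
               if j != i and dist(q_alpha[j], q_alpha[i]) < r]
    n_beta = [k for k in range(len(q_beta_i))
              if dist(q_beta_i[k], q_alpha[i]) < r]
    return n_alpha, n_beta
```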

2.3. Collective Potential Functions

The lattice-type structure can be used to capture the geometry between α -agents and between the α -agent and β -agent. In this structure, the distances between the α -agent and its neighboring α -agents are equal, as are the distances to its neighboring β -agents. Thus, the positions of α -agents and β -agents are represented by the following constraints:
$$\|q_j - q_i\|_\sigma = d_\sigma \quad \forall j \in N_i^\alpha \tag{6}$$
$$\|\breve{q}_{i,k} - q_i\|_\sigma = d'_\sigma \quad \forall k \in N_i^\beta \tag{7}$$
where $\|z\|_\sigma = \frac{1}{\epsilon_1}\left[\sqrt{1 + \epsilon_1 \|z\|^2} - 1\right]$, $\epsilon_1 > 0$, $\|q_j - q_i\| = d$, $\|\breve{q}_{i,k} - q_i\| = d'$, $d_\sigma = \|d\|_\sigma$, $d'_\sigma = \|d'\|_\sigma$, $d \le r$, and $d' \le r$. To obtain the solutions of these constraints, it is necessary to construct two smooth collective potential functions $\Phi_\alpha(q)$ and $\Phi_\beta(q)$ such that their local minima correspond to the solutions of constraints (6) and (7), respectively. The detailed construction procedure is described below:
First, the bump function $\rho_h(z)$ is introduced as
$$\rho_h(z) = \begin{cases} 1, & z \in [0, h) \\ \frac{1}{2}\left[1 + \cos\left(\pi \frac{z - h}{1 - h}\right)\right], & z \in [h, 1] \\ 0, & \text{otherwise} \end{cases} \tag{8}$$
where $h \in (0, 1)$. Second, let $\psi_\alpha(z)$ be the pairwise attractive/repulsive potential between $\alpha$-agents and $\psi_\beta(z)$ be the pairwise attractive/repulsive potential between the $\alpha$-agent and $\beta$-agent; then
$$\psi_\alpha(z) = \int_{d_\sigma}^{z} \phi_\alpha(s)\, ds \tag{9}$$
$$\psi_\beta(z) = \int_{d'_\sigma}^{z} \phi_\beta(s)\, ds \tag{10}$$
where $\phi_\alpha(z) = \rho_h(z / r_\sigma)\, \phi(z - d_\sigma)$, $r_\sigma = \|r\|_\sigma$, $\phi_\beta(z) = \rho_h(z / d'_\sigma)\left(\varsigma_1(z - d'_\sigma) - 1\right)$, $\phi(z) = \frac{1}{2}\left[(a + b)\, \varsigma_1(z + c) + (a - b)\right]$, and $\varsigma_1(z) = z / \sqrt{1 + z^2}$. It is noted that $\psi_\alpha(z)$ has a finite cut-off at $r_\sigma$ since $\phi_\alpha(z) = 0$ for all $z \ge r_\sigma$. In addition, it is essential to take $0 < a \le b$ and $c = |a - b| / \sqrt{4ab}$ to guarantee that $\psi_\alpha(z)$ and $\psi_\beta(z)$ have a minimum at $z = d_\sigma$ and $z = d'_\sigma$, respectively. Finally, the collective potential functions are defined by
$$\Phi_\alpha(q) = \sum_{i \in V_\alpha} \sum_{j \in N_i^\alpha} \psi_\alpha(\|q_j - q_i\|_\sigma) \tag{11}$$
$$\Phi_\beta(q) = \sum_{i \in V_\alpha} \sum_{k \in N_i^\beta} \psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma) \tag{12}$$
It can be observed that the collective potential functions $\Phi_\alpha(q)$ and $\Phi_\beta(q)$ are minimized at (6) and (7), respectively.
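The construction above can be sketched numerically. In this sketch the $\sigma$-norm is written in its scalar form, and the parameter values ($\epsilon_1$, $h$, $a$, $b$) are illustrative assumptions rather than the paper's settings:

```python
import math

EPS1, H = 0.1, 0.2            # epsilon_1 and h (illustrative values)
A_, B_ = 5.0, 5.0             # 0 < a <= b; with a = b, c = 0
C_ = abs(A_ - B_) / math.sqrt(4 * A_ * B_)

def sigma_norm(z):
    """Scalar sigma-norm: (sqrt(1 + eps1 * z^2) - 1) / eps1."""
    return (math.sqrt(1.0 + EPS1 * z * z) - 1.0) / EPS1

def rho_h(z):
    """Bump function (8): 1 on [0, h), smooth cosine decay on [h, 1]."""
    if 0.0 <= z < H:
        return 1.0
    if H <= z <= 1.0:
        return 0.5 * (1.0 + math.cos(math.pi * (z - H) / (1.0 - H)))
    return 0.0

def varsigma1(z):
    """varsigma_1(z) = z / sqrt(1 + z^2)."""
    return z / math.sqrt(1.0 + z * z)

def phi_alpha(z, d_sig, r_sig):
    """Action function phi_alpha(z) = rho_h(z / r_sigma) * phi(z - d_sigma),
    whose integral from d_sigma gives the potential psi_alpha in (9)."""
    phi = 0.5 * ((A_ + B_) * varsigma1(z - d_sig + C_) + (A_ - B_))
    return rho_h(z / r_sig) * phi
```

Note that `phi_alpha` vanishes at `z == d_sig` (the potential minimum) and for all `z >= r_sig` (the finite cut-off), matching the properties stated above.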

2.4. A Previous Algorithm for Evaluating $u_i$

In the Olfati-Saber flocking (OSF) algorithm [10], the interaction framework between $\alpha$-agents, the $\gamma$-agent, and obstacles is shown in Figure 1. In addition, the control input $u_i$ of $\alpha$-agent $i$ is evaluated by
$$u_i = u_i^\beta + u_i^\alpha + u_i^\gamma \tag{13}$$
where $u_i^\beta$ is the $(\alpha, \beta)$ interaction term, $u_i^\alpha$ is the $(\alpha, \alpha)$ interaction term, and $u_i^\gamma$ is the $(\alpha, \gamma)$ interaction term. These interaction terms are evaluated independently as follows:

2.4.1. Evaluate $u_i^\beta$

When $\alpha$-agent $i$ perceives obstacle $O_k$ within its sensing radius $r$, a virtual $\beta$-agent $k$ is generated at the projection point of $\alpha$-agent $i$ on the boundary of obstacle $O_k$. Then, the position and velocity of $\beta$-agent $k$ are determined using the obstacle projection method: the position and velocity of $\beta$-agent $k$ are the projections of those of $\alpha$-agent $i$ in the tangential direction of the obstacle surface, as illustrated in Figure 2. Let $\breve{p}_{i,k} \in \mathbb{R}^{n \times 1}$ be the velocity of $\beta$-agent $k$. For the infinite wall obstacle, the following formulas hold:
$$\breve{q}_{i,k} = P q_i + (I - P) y_k, \qquad \breve{p}_{i,k} = P p_i \tag{14}$$
where $P = I - a_k a_k^T$ is the projection matrix, $a_k$ is the unit normal vector, $y_k$ is a point on the infinite wall boundary, and $I$ is the identity matrix. For the spherical obstacle, the following formulas are defined:
$$\breve{q}_{i,k} = \mu q_i + (1 - \mu) y_k, \qquad \breve{p}_{i,k} = \mu P p_i \tag{15}$$
where $y_k$ is the center of the spherical obstacle, $\mu = r_k / \|q_i - y_k\|$, and $r_k$ is the radius of the spherical obstacle. With the position and velocity of $\beta$-agent $k$, the $(\alpha, \beta)$ interaction term is constructed as
$$u_i^\beta = -c_1^\beta \sum_{k \in N_i^\beta} \nabla_{q_i} \psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma) + c_2^\beta \sum_{k \in N_i^\beta} b_{i,k}(q)\, (\breve{p}_{i,k} - p_i) \tag{16}$$
where $c_1^\beta$ and $c_2^\beta$ are the feedback gains, $\nabla_{q_i} \psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma)$ is the gradient of the pairwise attractive/repulsive potential $\psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma)$ between $\alpha$-agent $i$ and $\beta$-agent $k$ at position $q_i$, and $b_{i,k}(q)$ is the heterogeneous adjacency between $\alpha$-agent $i$ and $\beta$-agent $k$, given by
$$b_{i,k}(q) = \rho_h(\|\breve{q}_{i,k} - q_i\|_\sigma / d'_\sigma) \tag{17}$$
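The two projections (14) and (15) can be sketched as follows. For the spherical case, the radial unit normal used to build $P$ follows the usual OSF convention, which is an assumption here since $P$ is not restated for the sphere in this section:

```python
def beta_agent_wall(q_i, p_i, a_k, y_k):
    """Projection (14): q = P q_i + (I - P) y_k, p = P p_i, with
    P = I - a_k a_k^T. a_k must be a unit normal; y_k lies on the wall."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    qn, yn = dot(a_k, q_i), dot(a_k, y_k)
    q_b = [q - qn * a + yn * a for q, a in zip(q_i, a_k)]
    p_b = [p - dot(a_k, p_i) * a for p, a in zip(p_i, a_k)]
    return q_b, p_b

def beta_agent_sphere(q_i, p_i, y_k, r_k):
    """Projection (15): q = mu q_i + (1 - mu) y_k, p = mu P p_i, with
    mu = r_k / ||q_i - y_k||; the radial normal a_k = (q_i - y_k) / ||q_i - y_k||
    is an assumption (OSF convention)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diff = [q - y for q, y in zip(q_i, y_k)]
    dist = dot(diff, diff) ** 0.5
    mu = r_k / dist
    a_k = [d / dist for d in diff]
    q_b = [mu * q + (1.0 - mu) * y for q, y in zip(q_i, y_k)]
    p_b = [mu * (p - dot(a_k, p_i) * a) for p, a in zip(p_i, a_k)]
    return q_b, p_b
```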

2.4.2. Evaluate $u_i^\alpha$

The $(\alpha, \alpha)$ interaction term is constructed as
$$u_i^\alpha = -c_1^\alpha \sum_{j \in N_i^\alpha} \nabla_{q_i} \psi_\alpha(\|q_j - q_i\|_\sigma) + c_2^\alpha \sum_{j \in N_i^\alpha} a_{ij}(q)\, (p_j - p_i) \tag{18}$$
where $c_1^\alpha$ and $c_2^\alpha$ are the feedback gains, $\nabla_{q_i} \psi_\alpha(\|q_j - q_i\|_\sigma)$ is the gradient of the pairwise attractive/repulsive potential $\psi_\alpha(\|q_j - q_i\|_\sigma)$ between $\alpha$-agents $i$ and $j$ at position $q_i$, and $a_{ij}(q)$ is the homogeneous adjacency between $\alpha$-agents $i$ and $j$, given by
$$a_{ij}(q) = \begin{cases} \rho_h(\|q_j - q_i\|_\sigma / r_\sigma), & j \ne i \\ 0, & j = i \end{cases} \tag{19}$$
Here, the first term in Formulas (16) and (18) is referred to as the gradient-based term, and the second term in Formulas (16) and (18) is referred to as the consensus term.

2.4.3. Evaluate $u_i^\gamma$

The $(\alpha, \gamma)$ interaction term is constructed as
$$u_i^\gamma = -c_1^\gamma\, \varsigma_1(q_i - q_\gamma) - c_2^\gamma\, (p_i - p_\gamma) \tag{20}$$
where $c_1^\gamma$ and $c_2^\gamma$ are the feedback gains.
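For instance, the bounded leader-tracking term (20) can be sketched as follows; the gains are illustrative, and the vector form $\varsigma_1(z) = z / \sqrt{1 + \|z\|^2}$ is an assumption based on the usual OSF convention:

```python
import math

def u_gamma_osf(q_i, p_i, q_g, p_g, c1=1.0, c2=1.0):
    """OSF (alpha, gamma) term (20): bounded position feedback through
    varsigma_1 plus velocity matching toward the virtual leader."""
    diff = [q - g for q, g in zip(q_i, q_g)]
    norm = math.sqrt(sum(d * d for d in diff))
    s = [d / math.sqrt(1.0 + norm * norm) for d in diff]   # varsigma_1(q_i - q_g)
    return [-c1 * si - c2 * (p - g) for si, p, g in zip(s, p_i, p_g)]
```

The $\varsigma_1$ mapping keeps the position-feedback component bounded by $c_1^\gamma$ no matter how far the agent is from the leader.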
Definition 1.
Define $\alpha$-agent $i$ as being trapped in dynamic equilibrium when it perceives obstacles preventing it from following the $\gamma$-agent, and both its acceleration and velocity are near zero for a period of time. The scenario of $\alpha$-agent $i$ being trapped in dynamic equilibrium is shown in Figure 3: the virtual forces acting on $\alpha$-agent $i$, exerted by the $\gamma$-agent and the obstacles $k_1$ and $k_2$, reach equilibrium.
Problem 1.
In this paper, each $\alpha$-agent needs to avoid colliding with the $\beta$-agents and the other $\alpha$-agents while following the $\gamma$-agent. Therefore, each $\alpha$-agent needs to evaluate the interaction terms $u_i^\beta$ (16), $u_i^\alpha$ (18), and $u_i^\gamma$ (20). Affected by these interaction terms, $\alpha$-agent $i$ is considered to be in dynamic equilibrium when it has little or no displacement over time, i.e.,
$$\|q_i(t - m\Delta t) - q_i(t)\| \le \lambda \tag{21}$$
where $\Delta t$ is the step time, $m$ is the number of steps, and $\lambda$ is a small constant. In dynamic equilibrium, $\alpha$-agent $i$ remains static or in oscillatory motion; hence, it cannot bypass the obstacles and follow the group goal. The problem of multi-agents being trapped in dynamic equilibrium is difficult to solve due to its complexity.
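Condition (21) suggests a simple sliding-window implementation; a sketch (the class and parameter names are our own):

```python
from collections import deque

class EquilibriumDetector:
    """Sliding-window check of Condition (21): alpha-agent i is flagged as
    trapped when its displacement over the last m steps is at most lam."""

    def __init__(self, m, lam):
        self.m, self.lam = m, lam
        self.history = deque(maxlen=m + 1)   # q_i(t - m*dt), ..., q_i(t)

    def trapped(self, q_i):
        self.history.append(list(q_i))
        if len(self.history) <= self.m:      # window not yet full
            return False
        disp = sum((a - b) ** 2 for a, b in
                   zip(self.history[0], self.history[-1])) ** 0.5
        return disp <= self.lam
```

An agent oscillating within a ball of radius well below `lam` is flagged once the window fills, while a steadily moving agent is not.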

3. Proposed Algorithm

To solve Problem 1, a fuzzy flocking control (FFC) algorithm is proposed. In this algorithm, the $(\alpha, \beta)$ interaction term $u_i^\beta$, the $(\alpha, \alpha)$ interaction term $u_i^\alpha$, and the $(\alpha, \gamma)$ interaction term $u_i^\gamma$ are reconstructed such that the $\alpha$-agent can escape from the dynamic equilibrium, bypass obstacles, and avoid collisions. The structure of this algorithm is shown in Figure 4, and the details are described below:

3.1. Evaluate $u_i^\beta$

The $(\alpha, \beta)$ interaction term $u_i^\beta$ is reconstructed by calculating the fuzzy expected velocity of the $\alpha$-agent for the $\beta$-agent and designing a nonnegative subsection function. When $\alpha$-agent $i$ is trapped in dynamic equilibrium, the fuzzy expected velocity is obtained by the fuzzy control method, allowing $\alpha$-agent $i$ to escape from the dynamic equilibrium. When $\alpha$-agent $i$ is not trapped in dynamic equilibrium, the projected expected velocity of the $\alpha$-agent for the $\beta$-agent is obtained by Formula (14) or (15), which enables $\alpha$-agent $i$ to bypass the obstacles. Meanwhile, the sensing radius region of $\alpha$-agent $i$ is divided into four subregions, followed by the design of a nonnegative subsection function to adjust the attractive/repulsive potentials in these subregions, which reduces the possibility of collisions between the $\alpha$-agent and obstacles.

3.1.1. Fuzzy Expected Velocity Calculated by Fuzzy Control

When $\alpha$-agent $i$ perceives obstacles that prevent it from following the $\gamma$-agent, it is determined whether the position of $\alpha$-agent $i$ satisfies Condition (21). If Condition (21) holds, $\alpha$-agent $i$ is trapped in dynamic equilibrium. Then, the fuzzy expected velocity $\tilde{p}_{i,k}$ of $\alpha$-agent $i$ for $\beta$-agent $k$ is calculated by the fuzzy control method. The core idea of this method is to encode experience-based knowledge in the form of fuzzy rules. In this method, the crisp inputs with precise values are first converted to fuzzy variable sets, which is referred to as fuzzification. Then, these fuzzy variable sets are transformed into new fuzzy variable sets by performing fuzzy inference based on the fuzzy rules. Finally, the new fuzzy variable sets are converted back to crisp outputs, which is referred to as defuzzification. The details of this method are described as follows:
Crisp inputs: The fuzzy expected velocity $\tilde{p}_{i,k}$ of $\alpha$-agent $i$ for $\beta$-agent $k$ is equal to the distance between the current position $q_i$ of $\alpha$-agent $i$ and its expected position $\tilde{q}_i$ at the next time step, divided by the step time $\Delta t$. Thus, determining the expected position $\tilde{q}_i$ of $\alpha$-agent $i$ is crucial for obtaining the fuzzy expected velocity $\tilde{p}_{i,k}$. The expected position $\tilde{q}_i$ can be calculated from a phase and a distance; accordingly, the crisp inputs to the fuzzy control are defined as a phase and a distance. To handle phases simply, the reference coordinate system is rotated to the agent coordinate system when $\alpha$-agent $i$ perceives the obstacles. In the agent coordinate system, the origin is the position of $\alpha$-agent $i$ and the $x$-axis points toward the $\gamma$-agent. Next, the relative phase $\theta_{i,k}$ between $\alpha$-agent $i$ and obstacle boundary point $k$ is obtained in the agent coordinate system, with $\theta_{i,k} = \theta_{i,k} - 2\pi$ if $\theta_{i,k} > \pi$. Denote $\vartheta_i = \{\theta_{i,k},\ k \in N_i^\beta\}$ as the relative phase set of $\alpha$-agent $i$, $m_p$ and $m_n$ as the numbers of positive and negative elements in $\vartheta_i$, and $k_p$ and $k_n$ as the obstacle boundary points corresponding to the maximum and minimum phases in $\vartheta_i$. Then, the input relative phase is $\theta_{i,in} = \theta_{i,k_n}$ and the input distance is $d_{i,in} = \|\breve{q}_{i,k_n} - q_i\|$ if $m_p > m_n$; otherwise, $\theta_{i,in} = \theta_{i,k_p}$ and $d_{i,in} = \|\breve{q}_{i,k_p} - q_i\|$. Here, the input relative phase $\theta_{i,in}$ and the input distance $d_{i,in}$ are the crisp inputs to the fuzzy control.
Fuzzification: Let $\vartheta_{in}$ denote the fuzzy variable set of the input relative phase $\theta_{i,in}$, $\vartheta_{in} = \{\text{NB}, \text{N}, \text{Z}, \text{P}, \text{PB}\}$, where NB, N, Z, P, and PB abbreviate negative big, negative, zero, positive, and positive big. Meanwhile, let $d_{in}$ denote the fuzzy variable set of the input distance $d_{i,in}$, $d_{in} = \{\text{VS}, \text{S}, \text{M}, \text{L}, \text{VL}\}$, where VS, S, M, L, and VL abbreviate very short, short, medium, long, and very long. Then, the triangular membership functions $\eta_{A_i}(\theta_{i,in})$ and $\eta_{B_i}(d_{i,in})$, shown in Figure 5a,b, are used to obtain the degrees corresponding to the linguistic variables in the fuzzy variable sets $\vartheta_{in}$ and $d_{in}$, respectively. Thus, the input relative phase $\theta_{i,in}$ and the input distance $d_{i,in}$ can be converted to the input fuzzy variable sets $A_i$ and $B_i$, given by
$$A_i = \{ (\theta_{i,in},\ \eta_{A_i}(\theta_{i,in})) \mid \theta_{i,in} \in [-\pi, \pi] \} \tag{22}$$
$$B_i = \{ (d_{i,in},\ \eta_{B_i}(d_{i,in})) \mid d_{i,in} \in [0, r] \} \tag{23}$$
Fuzzy inference: With the input fuzzy variable sets $A_i$ and $B_i$, it is possible to perform fuzzy inference. The fuzzy inference procedure utilizes fuzzy rules and the Mamdani algorithm [32] to create a mapping between the input fuzzy variable sets and the output fuzzy variable sets. Based on experience and intuition, the rule base is defined in Table 1, where $\vartheta_{out}$ denotes the fuzzy variable set of the output relative phase and $d_{out}$ denotes the fuzzy variable set of the output distance. It can be seen from Table 1 that the first fuzzy rule is
$$R^1:\ \text{IF } \theta_{in}^1 \text{ is NB AND } d_{in}^1 \text{ is VS THEN } \theta_{out}^1 \text{ is P AND } d_{out}^1 \text{ is S}$$
where $\theta_{in}^1$ is a fuzzy variable of the input relative phase, $d_{in}^1$ is a fuzzy variable of the input distance, $\theta_{out}^1$ is a fuzzy variable of the output relative phase, and $d_{out}^1$ is a fuzzy variable of the output distance. Furthermore, the Mamdani algorithm consists of two steps: (i) assign the degree of the $l$th fuzzy rule, calculated by
$$\deg(R^l) = \eta_{A_i}(\theta_{in}^l) \wedge \eta_{B_i}(d_{in}^l) \wedge \eta_{A_i}(\theta_{out}^l) \wedge \eta_{B_i}(d_{out}^l) \tag{24}$$
where $l = 1, 2, \ldots, N_r$, $N_r$ is the total number of fuzzy rules in the rule base, and $\wedge$ denotes the "min" operation; and (ii) let $\eta_o^l$ be the degree of the $l$th fuzzy rule in the output fuzzy subset for the input $\{\theta_{in}^l, d_{in}^l\}$, given by
$$\eta_o^l = \eta_{A_i}(\theta_{in}^l) \wedge \eta_{B_i}(d_{in}^l) \tag{25}$$
then the output fuzzy variable sets $\{D_i, F_i\}$ are the synthesis of $\eta_o^l$ and $\deg(R^l)$, obtained by
$$\{D_i, F_i\} = \bigvee_{l=1}^{N_r} \left[ \eta_o^l \wedge \deg(R^l) \right] \tag{26}$$
where $\vee$ denotes the "max" operation.
Defuzzification: Let $\theta_{i,out}$ be the output relative phase and $d_{i,out}$ be the output distance. Then, the output fuzzy variable sets can be converted into a series of subregions using the membership functions $\eta_{A_i}(\theta_{i,out})$ and $\eta_{B_i}(d_{i,out})$. In addition, let $m_{A_i}$ be the number of subregions obtained from the membership functions $\eta_{A_i}(\theta_{i,out})$ and $\theta_{i,out}^g$ be the phase at the center of subregion $g$. Similarly, let $m_{B_i}$ be the number of subregions obtained from the membership functions $\eta_{B_i}(d_{i,out})$ and $d_{i,out}^g$ be the distance at the center of subregion $g$. Then, the output relative phase $\theta_{i,out}$ and the output distance $d_{i,out}$ are obtained by calculating the center of gravity as follows:
$$\theta_{i,out} = \frac{\sum_{g=1}^{m_{A_i}} \theta_{i,out}^g \cdot \eta_{A_i}(\theta_{i,out}^g)}{\sum_{g=1}^{m_{A_i}} \eta_{A_i}(\theta_{i,out}^g)} \tag{27}$$
$$d_{i,out} = \frac{\sum_{g=1}^{m_{B_i}} d_{i,out}^g \cdot \eta_{B_i}(d_{i,out}^g)}{\sum_{g=1}^{m_{B_i}} \eta_{B_i}(d_{i,out}^g)} \tag{28}$$
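The min–max inference (24)–(26) and center-of-gravity defuzzification (27)/(28) can be sketched for a single output variable as follows. The triangular set parameters below are assumptions for illustration, not the exact membership functions of Figure 5:

```python
import math

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c assumed)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative output-phase fuzzy sets on [-pi, pi] (assumed parameters).
PHASE_SETS = {
    "N": (-math.pi, -math.pi / 2, 0.0),
    "Z": (-math.pi / 2, 0.0, math.pi / 2),
    "P": (0.0, math.pi / 2, math.pi),
}

def mamdani_centroid(firing, sets, lo, hi, n=201):
    """Clip each output set at its rule firing degree (the 'min' step),
    aggregate with 'max' as in (26), then defuzzify by the center of
    gravity as in (27)/(28). firing maps linguistic term -> degree."""
    num = den = 0.0
    for g in range(n):
        x = lo + (hi - lo) * g / (n - 1)
        mu = max((min(tri(x, *sets[t]), w) for t, w in firing.items()),
                 default=0.0)
        num += x * mu
        den += mu
    return num / den if den > 0.0 else 0.0
```

For example, a single fully fired "P" rule places the centroid at the peak of the P set, and symmetric firings of "N" and "P" cancel to a near-zero output phase.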
Crisp outputs: The expected position $\tilde{q}_i$ of $\alpha$-agent $i$ is calculated by
$$\tilde{q}_i = q_i + d_{i,out}\, \xi(\theta_{i,out}) \tag{29}$$
where
$$\theta_{i,out} = [\theta_{i,az,out},\ \theta_{i,el,out}] \tag{30}$$
$$\xi(\theta_{i,out}) = \xi_1(\theta_{i,el,out}) \odot \xi_2(\theta_{i,az,out}) \tag{31}$$
$$\xi_1(\theta_{i,el,out}) = [\cos\theta_{i,el,out},\ \cos\theta_{i,el,out},\ 1] \tag{32}$$
$$\xi_2(\theta_{i,az,out}) = [\sin\theta_{i,az,out},\ \cos\theta_{i,az,out},\ \sin\theta_{i,az,out}] \tag{33}$$
and $\theta_{i,az,out}$ is the relative azimuth, $\theta_{i,el,out}$ is the relative elevation, and $\odot$ is the Hadamard product. Note that $\theta_{i,el,out} = 0$ if $\alpha$-agent $i$ is in two-dimensional Euclidean space. Thus, the fuzzy expected velocity $\tilde{p}_{i,k}$ of $\alpha$-agent $i$ for $\beta$-agent $k$ is calculated by
$$\tilde{p}_{i,k} = \frac{\tilde{q}_i - q_i}{\Delta t} \tag{34}$$
Substituting Formula (29) into Formula (34), we have
$$\tilde{p}_{i,k} = \frac{d_{i,out}\, \xi(\theta_{i,out})}{\Delta t} \tag{35}$$

3.1.2. Nonnegative Subsection Function

When the distance between $\alpha$-agent $i$ and $\beta$-agent $k$ is less than $d'$, the repulsion enables their separation. However, a continuous reduction in the distance between $\alpha$-agent $i$ and $\beta$-agent $k$ presents a collision risk. Early warning can be used to reduce this risk [33]. Hence, we define $d_c$ as the minimum early-warning distance and the collision region as the area within a distance $d_c$ of $\alpha$-agent $i$. Furthermore, maintaining a strict distance $d'$ between $\alpha$-agent $i$ and $\beta$-agent $k$ is difficult in practice. To address this, we define $\epsilon_2$ as the disturbance factor and the alignment region as the area whose distance from $\alpha$-agent $i$ is greater than $d' - \epsilon_2$ and at most $d' + \epsilon_2$. Thus, the sensing radius region of $\alpha$-agent $i$ is divided into four subregions, as shown in Figure 6. It is known that the attractive/repulsive potential can achieve separation, alignment, and attraction between $\alpha$-agent $i$ and $\beta$-agent $k$ [10]. Here, a nonnegative subsection function $\kappa(d_{i,k}, d')$ is designed to adjust the attractive/repulsive potential in these four subregions, given by
$$\kappa(d_{i,k}, d') = \begin{cases} \tau_1, & d_{i,k} \in (0, d_c) \\ \tau_2, & d_{i,k} \in [d_c, d' - \epsilon_2] \\ \tau_3, & d_{i,k} \in (d' - \epsilon_2, d' + \epsilon_2] \\ \tau_4, & d_{i,k} \in (d' + \epsilon_2, r] \end{cases} \tag{36}$$
where $\tau_1$, $\tau_2$, $\tau_3$, and $\tau_4$ are the weights, $d_c = d_{safe} + 2 p_{i,m} \Delta t$, $d_{safe}$ is the minimum safe distance for avoiding collisions (since the size of $\alpha$-agent $i$ cannot be ignored in practice), $\epsilon_2$ is a constant, $d_{i,k} = \|\breve{q}_{i,k} - q_i\|$, and $d_{i,k} \in (d' - \epsilon_2, d' + \epsilon_2]$ is the desired distance range for collision avoidance.
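A direct sketch of the subsection function (36); the weight values are illustrative and only need to satisfy the ordering $\tau_1 \ge \tau_2 \ge \tau_3 \ge \tau_4 \ge 0$:

```python
def kappa(d_ik, d_des, d_c, eps2, r, tau=(4.0, 3.0, 2.0, 1.0)):
    """Nonnegative subsection function (36) over the four subregions of
    the sensing radius. d_des is the desired distance (d' for beta-agents,
    d for alpha-agents); distances beyond r are never queried by the
    algorithm. tau = (tau1, tau2, tau3, tau4) is illustrative."""
    tau1, tau2, tau3, tau4 = tau
    if 0.0 < d_ik < d_c:
        return tau1                      # collision (early-warning) region
    if d_c <= d_ik <= d_des - eps2:
        return tau2                      # separation region
    if d_des - eps2 < d_ik <= d_des + eps2:
        return tau3                      # alignment region
    return tau4                          # attraction region (up to r)
```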
With the fuzzy expected velocity $\tilde{p}_{i,k}$ of $\alpha$-agent $i$ for $\beta$-agent $k$ and the nonnegative subsection function $\kappa(d_{i,k}, d')$, the $(\alpha, \beta)$ interaction term $u_i^\beta$ is reconstructed as
$$u_i^\beta = -c_1^\beta \sum_{k \in N_i^\beta} \kappa(d_{i,k}, d')\, \nabla_{q_i} \psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma) + (1 - h_i)\, c_2^\beta \sum_{k \in N_i^\beta} b_{i,k}(q)\, (\breve{p}_{i,k} - p_i) + h_i\, c_3^\beta \sum_{k \in N_i^\beta} b_{i,k}(q)\, (\tilde{p}_{i,k} - p_i) \tag{37}$$
where $c_3^\beta$ is the feedback gain, $d_{i,k} = \|\breve{q}_{i,k} - q_i\|$ is the distance between $\alpha$-agent $i$ and $\beta$-agent $k$, $\breve{q}_{i,k}$ is the projected expected position of $\alpha$-agent $i$ for $\beta$-agent $k$, $\breve{p}_{i,k}$ is the projected expected velocity of $\alpha$-agent $i$ for $\beta$-agent $k$, and $h_i = 1$ if $\alpha$-agent $i$ is trapped in dynamic equilibrium and $h_i = 0$ otherwise. Here, the projected expected position $\breve{q}_{i,k}$ and the projected expected velocity $\breve{p}_{i,k}$ are calculated by Formulas (14) and (15).

3.2. Evaluate $u_i^\alpha$

With the nonnegative subsection function $\kappa(d_{ij}, d)$ (defined as in (36) with $d$ in place of $d'$), the $(\alpha, \alpha)$ interaction term $u_i^\alpha$ is reconstructed as
$$u_i^\alpha = -c_1^\alpha \sum_{j \in N_i^\alpha} \kappa(d_{ij}, d)\, \nabla_{q_i} \psi_\alpha(\|q_j - q_i\|_\sigma) + c_2^\alpha \sum_{j \in N_i^\alpha} a_{ij}(q)\, (p_j - p_i) \tag{38}$$
where $d_{ij} = \|q_j - q_i\|$ is the distance between $\alpha$-agents $i$ and $j$.

3.3. Evaluate $u_i^\gamma$

To increase the attractiveness of the $\gamma$-agent to $\alpha$-agent $i$, the $(\alpha, \gamma)$ interaction term $u_i^\gamma$ is reconstructed as
$$u_i^\gamma = -c_1^\gamma\, (q_i - q_\gamma) - c_2^\gamma\, (p_i - p_\gamma) \tag{39}$$
Using Formulas (37)–(39) and (13), the control input $u_i$ for $\alpha$-agent $i$ can be obtained. In summary, the details of the FFC algorithm are described in Algorithm 1.
Algorithm 1 FFC algorithm
  • Input: Runtime $T_m$, $r$, $d$, $d'$, $\epsilon_1$, $\epsilon_2$, $h$, $a$, $b$, $c_1^\alpha$, $c_2^\alpha$, $c_1^\beta$, $c_2^\beta$, $c_3^\beta$, $c_1^\gamma$, $c_2^\gamma$, $m$, $\Delta t$, $\lambda$, $p_{i,m}$, $d_{safe}$, $\tau_1$, $\tau_2$, $\tau_3$, $\tau_4$.
  • Initialization: $q_\gamma$, $p_\gamma$, $q_i$, $p_i$, $t = 0$.
  • Iteration:
    1: If $\|\breve{q}_{i,k} - q_i\| < r$ and Condition (21) is true, set $h_i = 1$ and calculate $\tilde{p}_{i,k}$ by fuzzy control:
      (1) Obtain $\theta_{i,in}$ and $d_{i,in}$ in the agent coordinate system.
      (2) Calculate $A_i$ using (22) and $B_i$ using (23).
      (3) Calculate $\{D_i, F_i\}$ using (26).
      (4) Calculate $\theta_{i,out}$ using (27) and $d_{i,out}$ using (28).
      (5) Calculate $\tilde{p}_{i,k}$ using (34).
      Otherwise, set $h_i = 0$ and calculate $\breve{p}_{i,k}$ using (14) or (15).
    2: Calculate $\kappa(d_{ij}, d)$ and $\kappa(d_{i,k}, d')$ using (36).
    3: Evaluate $u_i^\beta$ using (37).
    4: Evaluate $u_i^\alpha$ using (38).
    5: Evaluate $u_i^\gamma$ using (39).
    6: Evaluate $u_i$ using (13), update $q_i$ using (1), and output $q_i$ and $p_i$.
    7: If $t > T_m$, terminate the iteration. Otherwise, set $t = t + \Delta t$ and return to step 1.
  • Output: $q_i$, $p_i$.

4. Properties Analysis of FFC Algorithm

Denote $\Phi_\alpha(t)$ as the total potential energy between the $\alpha$-agents, $\Phi_\beta(t)$ as the total potential energy between the $\alpha$-agents and $\beta$-agents, $\Phi_{p\gamma}(t)$ as the total potential energy between the $\alpha$-agents and the $\gamma$-agent, and $\Phi_{k\gamma}(t)$ as the kinetic energy between the $\alpha$-agents and the $\gamma$-agent. Then, the total energy $E(t)$ can be defined by
$$E(t) = \Phi_\alpha(t) + \Phi_\beta(t) + \Phi_{p\gamma}(t) + \Phi_{k\gamma}(t) \tag{40}$$
where
$$\Phi_\alpha(t) = \frac{c_1^\alpha}{2} \sum_{i=1}^{N} \sum_{j \in N_i^\alpha} \kappa(d_{ij}, d)\, \psi_\alpha(\|q_j - q_i\|_\sigma) \tag{41}$$
$$\Phi_\beta(t) = c_1^\beta \sum_{i=1}^{N} \sum_{k \in N_i^\beta} \kappa(d_{i,k}, d')\, \psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma) \tag{42}$$
$$\Phi_{p\gamma}(t) = \frac{c_1^\gamma}{2} \sum_{i=1}^{N} (q_i - q_\gamma)^T (q_i - q_\gamma) \tag{43}$$
$$\Phi_{k\gamma}(t) = \frac{1}{2} \sum_{i=1}^{N} (p_i - p_\gamma)^T (p_i - p_\gamma) \tag{44}$$
Theorem 1.
Consider a group of $N$ $\alpha$-agents with motion model (1). The control inputs acting on these $\alpha$-agents are obtained by Formulas (13) and (37)–(39). Assume that the $\gamma$-agent is static, i.e., $p_\gamma = 0$. Furthermore, assume that the initial total energy $E(0)$ is finite. Then, $\|q_i - q_\gamma\| \le \sqrt{2E(0)/c_1^\gamma}$, $i = 1, 2, \ldots, N$, holds for all $t \ge 0$.
Proof. 
Calculating the first-order derivative of the total energy $E(t)$, we can obtain
$$\frac{dE(t)}{dt} = \frac{d\Phi_\alpha(t)}{dt} + \frac{d\Phi_\beta(t)}{dt} + \frac{d\Phi_{p\gamma}(t)}{dt} + \frac{d\Phi_{k\gamma}(t)}{dt} \tag{45}$$
where
$$\frac{d\Phi_\alpha(t)}{dt} = c_1^\alpha \sum_{i=1}^{N} \sum_{j \in N_i^\alpha} \kappa(d_{ij}, d)\, (p_i)^T \nabla_{q_i} \psi_\alpha(\|q_j - q_i\|_\sigma) \tag{46}$$
$$\frac{d\Phi_\beta(t)}{dt} = c_1^\beta \sum_{i=1}^{N} \sum_{k \in N_i^\beta} \kappa(d_{i,k}, d')\, (p_i)^T \nabla_{q_i} \psi_\beta(\|\breve{q}_{i,k} - q_i\|_\sigma) \tag{47}$$
$$\frac{d\Phi_{p\gamma}(t)}{dt} = c_1^\gamma \sum_{i=1}^{N} (p_i)^T (q_i - q_\gamma) \tag{48}$$
$$\frac{d\Phi_{k\gamma}(t)}{dt} = \sum_{i=1}^{N} (p_i)^T u_i \tag{49}$$
Using Formulas (13) and (37)–(39), Formula (45) can be rewritten as
$$\frac{dE(t)}{dt} = -c_2^\alpha \sum_{i=1}^{N} \sum_{j \in N_i^\alpha} a_{ij}(q)\, (p_i)^T (p_i - p_j) - c_2^\beta \sum_{i=1}^{N} \sum_{k \in N_i^\beta} (1 - h_i)\, b_{i,k}(q)\, (p_i)^T (p_i - \breve{p}_{i,k}) - c_3^\beta \sum_{i=1}^{N} \sum_{k \in N_i^\beta} h_i\, b_{i,k}(q)\, (p_i)^T (p_i - \tilde{p}_{i,k}) - c_2^\gamma \sum_{i=1}^{N} (p_i)^T p_i \tag{50}$$
Let $v_1 = [p_1^T \cdots p_N^T]^T$, $v_2 = [\breve{p}_{1,k}^T \cdots \breve{p}_{N,k}^T]^T$, $v_3 = [\tilde{p}_{1,k}^T \cdots \tilde{p}_{N,k}^T]^T$, and $v = [v_1^T, v_2^T, v_3^T]^T$. In addition, let $F_A$ be a diagonal matrix whose $i$th diagonal element is $f_a(i,i) = \sum_{j \in N_i^\alpha} a_{ij}(q)$, $A$ be an adjacency matrix whose $(i,j)$th element is $a_{ij}(q)$, $L_A$ be the Laplacian matrix $L_A = F_A - A$, $F_B$ be a diagonal matrix whose $i$th diagonal element is $f_b(i,i) = \sum_{k \in N_i^\beta} (1 - h_i)\, b_{i,k}(q)$, $B$ be an adjacency matrix whose $(i,k)$th element is $(1 - h_i)\, b_{i,k}(q)$, $F_C$ be a diagonal matrix whose $i$th diagonal element is $f_c(i,i) = \sum_{k \in N_i^\beta} h_i\, b_{i,k}(q)$, and $C$ be an adjacency matrix whose $(i,k)$th element is $h_i\, b_{i,k}(q)$. This introduces the matrix
$$R = \begin{bmatrix} c_2^\alpha L_A + c_2^\beta F_B + c_3^\beta F_C + c_2^\gamma I_N & F_0 & F_0 \\ -c_2^\beta B & F_0 & F_0 \\ -c_3^\beta C & F_0 & F_0 \end{bmatrix} \tag{51}$$
Formula (50) can be equivalently reformulated as
$$\frac{dE(t)}{dt} = -v^T (R \otimes I_n)\, v \tag{52}$$
where $I_N \in \mathbb{R}^{N \times N}$ and $I_n \in \mathbb{R}^{n \times n}$ are the identity matrices, $F_0 \in \mathbb{R}^{N \times N}$ is the zero matrix, and $\otimes$ is the Kronecker product.
Since $c_2^\alpha > 0$, $c_2^\beta > 0$, $c_3^\beta > 0$, $c_2^\gamma > 0$, and $L_A$, $F_B$, $F_C$ are positive semidefinite matrices, $R$ is a positive semidefinite matrix. Thus, we have $v^T (R \otimes I_n)\, v \ge 0$ and $\frac{dE(t)}{dt} \le 0$. It can be seen that the total energy $E(t)$ is a nonincreasing function, i.e., $E(t) \le E(0)$ for all $t \ge 0$. According to Formula (40), we have $\Phi_{p\gamma}(t) \le E(0)$. Therefore, $\|q_i - q_\gamma\| \le \sqrt{2E(0)/c_1^\gamma}$, $i = 1, 2, \ldots, N$, holds for all $t \ge 0$. □
Theorem 2.
Consider a group of $N$ $\alpha$-agents satisfying Theorem 1. Then, there is no collision between the $\alpha$-agents and other objects (i.e., other $\alpha$-agents or obstacles).
Proof. 
Assume that there are e ¯ 1 collisions between α -agents and e ¯ 2 collisions between the α -agent and obstacles occurring at time t, e ¯ 1 + e ¯ 2 = e ¯ . Then, we have
E ( t ) e ¯ 1 ψ α ( 0 ) + e ¯ 2 ψ β ( 0 ) ( e ¯ ) min { ψ α ( 0 ) , ψ β ( 0 ) }
It is known from Formulas (9) and (10) that the pairwise attractive/repulsive potentials $\psi_{\alpha}(0)$ and $\psi_{\beta}(0)$ tend to infinity. That is, $E(t) > E(0)$ if $\bar{e} \neq 0$. This contradicts the fact that $E(t)$ is a nonincreasing function. Hence, $\bar{e} = 0$, i.e., there is no collision between the $\alpha$-agent and objects. □

5. Experimental Results and Discussion

In this section, three experiments are undertaken to demonstrate the effectiveness of the proposed FFC algorithm: the spherical or wall obstacle scenario, the spherical and wall obstacle combination scenario, and the complex obstacle scenario. All experiments are carried out in Matlab R2022b on a Dell PC with a 3.2 GHz Intel i9-14900K CPU and 32 GB of memory. In addition, the OSF algorithm [10] is selected for comparison with the FFC algorithm. Let d α γ denote the distance between the center of the α -agents and the γ -agent, d min the minimum distance between the α -agent and objects, and t re the time taken for all α -agents to bypass obstacles and re-aggregate near the γ -agent. The distances d α γ and d min and the time t re are used to evaluate the performance of the OSF and FFC algorithms.
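The metrics d α γ and d min can be computed directly from the recorded agent positions at each time step. A minimal sketch of such an evaluation (helper names are ours; the paper does not publish its evaluation code):

```python
import math

def min_pairwise_distance(positions):
    """d_min candidate: smallest distance between any two agents at one step."""
    n = len(positions)
    best = math.inf
    for i in range(n):
        for j in range(i + 1, n):
            best = min(best, math.dist(positions[i], positions[j]))
    return best

def center_to_leader_distance(positions, leader):
    """d_alpha_gamma: distance from the flock center to the gamma-agent."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return math.dist((cx, cy), leader)
```

In a full evaluation, d min would take the minimum over agents, obstacles, and time steps, and t re would be the first step at which d α γ stays below a re-aggregation threshold.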
The parameters of the OSF and FFC algorithms are shown in Table 2. Specifically, the feedback gains satisfy c 1 α > 0 , c 2 α > 0 , c 1 β > 0 , c 2 β > 0 , c 3 β > 0 , c 1 γ > 0 , and c 2 γ > 0 . Setting these gains to identical values balances the combined effects of the ( α , α ) , ( α , β ) , and ( α , γ ) interaction terms, whereas excessively large gains may drive the velocity of the α -agent into saturation. The weights satisfy τ 2 ≥ τ 3 ≥ τ 4 ≥ 0 . In addition, the weight τ 1 is crucial for early warning and should satisfy τ 1 ≫ τ 2 .

5.1. Spherical or Wall Obstacle Scenario Experiment

In this experiment, two obstacle scenarios are selected as follows:
S A 1 :
There is a spherical obstacle of radius 15 at coordinates [ 55 ,   45 ] .
S A 2 :
There is a wall obstacle of length 30 at coordinates [ 45 ,   55 ] .
The number of α -agents is N = 20 , the initial positions of these α -agents are selected randomly from [ 5 ,   20 ] 2 , and the initial velocities of these α -agents are selected randomly from [ 0 ,   2 ] 2 . In addition, the initial position of the virtual γ -agent is [ 65 ,   80 ] , and the initial velocity of the virtual γ -agent is zero.
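The random initial states described above can be reproduced as follows; this is an illustrative sketch, with names and the seeding convention chosen by us rather than taken from the paper:

```python
import random

def init_agents(n=20,
                pos_box=((5.0, 20.0), (5.0, 20.0)),
                vel_box=((0.0, 2.0), (0.0, 2.0)),
                seed=None):
    """Draw n initial positions from [5, 20]^2 and velocities from [0, 2]^2."""
    rng = random.Random(seed)
    positions = [(rng.uniform(*pos_box[0]), rng.uniform(*pos_box[1]))
                 for _ in range(n)]
    velocities = [(rng.uniform(*vel_box[0]), rng.uniform(*vel_box[1]))
                  for _ in range(n)]
    return positions, velocities

# Virtual gamma-agent: fixed initial position, zero initial velocity.
leader_pos, leader_vel = (65.0, 80.0), (0.0, 0.0)
```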
The visual comparison of the two algorithms is shown in Figure 7. It can be seen that both algorithms enable all α -agents to bypass spherical or wall obstacles and follow the γ -agent. Furthermore, the quantitative comparison of the two algorithms over 100 runs is shown in Figure 8 and Figure 9. Figure 8 shows that the time taken for all α -agents to bypass obstacles and re-aggregate near the γ -agent is shorter for the FFC algorithm than for the OSF algorithm, which indicates that the FFC algorithm enables α -agents to bypass the obstacle faster. Meanwhile, Figure 9 shows that, for both algorithms, the minimum distance between the α -agent and objects is consistently greater than the minimum safe distance, i.e., d min > d safe , so there are no collisions between the α -agent and objects [34].

5.2. Spherical and Wall Obstacle Combination Scenario Experiment

In this experiment, four obstacle scenarios are selected as follows:
S B 1 :
There are three spherical obstacles of radius 15 at coordinates [ 30 ,   70 ] , [ 55 ,   53 ] , and [ 80 ,   36 ] .
S B 2 :
There are six wall obstacles of length 20 at coordinates [ 25 ,   85 ] , [ 25 ,   65 ] , [ 45 ,   65 ] , [ 45 ,   45 ] , [ 65 ,   45 ] , and [ 65 ,   25 ] .
S B 3 :
There are two spherical obstacles of radius 15 at coordinates [ 30 ,   70 ] and [ 76 ,   26 ] . Moreover, there are two wall obstacles of length 24 at coordinates [ 39 ,   58 ] and [ 39 ,   34 ] .
S B 4 :
There is a spherical obstacle of radius 15 at coordinates [ 42 ,   45 ] . In addition, there are four wall obstacles of length 20 at coordinates [ 38 ,   80 ] , [ 18 ,   60 ] , [ 46 ,   30 ] , and [ 66 ,   30 ] .
The number of α -agents is N = 20 , the initial positions of these α -agents are selected randomly from [ 5 ,   20 ] 2 , and the initial velocities of these α -agents are selected randomly from [ 0 ,   2 ] 2 . In addition, the initial position of the virtual γ -agent is [ 60 ,   85 ] , and the initial velocity of the virtual γ -agent is zero.
Both visual and quantitative comparisons for the two algorithms are shown in Figure 10, Figure 11 and Figure 12. It can be seen from Figure 10 that almost all α -agents of the OSF algorithm remain stationary or oscillate near the combined position of multiple obstacles. Furthermore, it can be observed from Figure 11 that the α -agents of the OSF algorithm cannot re-aggregate near the γ -agent. The reason is that the α -agents are trapped in dynamic equilibrium and find it difficult to bypass the obstacles. In contrast, the FFC algorithm enables all α -agents to bypass the obstacles and follow the γ -agent, which indicates that the FFC algorithm allows α -agents to escape from the dynamic equilibrium by altering the virtual force equilibrium. It can be noted from Figure 12 that, under the FFC algorithm, the minimum distance between the α -agent and objects decreases near the combined position of multiple obstacles. Nevertheless, the minimum distance d min remains greater than the minimum safe distance d safe .
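The trapped behaviour seen in Figure 10 can be flagged numerically. The sketch below is a simplified stand-in for the paper's dynamic equilibrium judgment rule: it marks an agent as trapped when its speed stays near zero over a window of steps while it remains far from the γ -agent (the thresholds v_eps, d_goal, and window are illustrative, not the paper's values):

```python
import math

def trapped_in_dynamic_equilibrium(speed_history, pos, goal,
                                   v_eps=0.05, d_goal=5.0, window=50):
    """Flag an agent as trapped: near-zero speed over a window, yet far from goal."""
    if len(speed_history) < window:
        return False
    slow = all(s < v_eps for s in speed_history[-window:])
    far = math.dist(pos, goal) > d_goal
    return slow and far
```

When this flag is raised, the FFC algorithm switches to the fuzzy expected velocity to break the virtual force equilibrium.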

5.3. Complex Obstacle Scenario Experiment

In this experiment, a complex obstacle scenario is selected as follows:
S C :
There are four spherical obstacles of radius 15 at coordinates [ 37 ,   80 ] , [ 58 ,   57 ] , [ 110 ,   56 ] , and [ 130 ,   80 ] , and two spherical obstacles of radius 10 at coordinates [ 60 ,   140 ] and [ 83 ,   108 ] . Additionally, there are seven wall obstacles of length 10 at coordinates [ 25 ,   40 ] , [ 45 ,   30 ] , [ 65 ,   20 ] , [ 85 ,   10 ] , [ 135 ,   5 ] , [ 135 ,   145 ] , and [ 15 ,   145 ] , seven wall obstacles of length 20 at coordinates [ 25 ,   30 ] , [ 45 ,   20 ] , [ 65 ,   10 ] , [ 85 ,   0 ] , [ 145 ,   25 ] , [ 145 ,   145 ] , and [ 5 ,   145 ] , two wall obstacles of length 13√2 at coordinates [ 45 ,   117 ] and [ 45 ,   93 ] , and four wall obstacles of length 15√2 at coordinates [ 84 ,   61 ] , [ 84 ,   34 ] , [ 99 ,   46 ] , and [ 69 ,   46 ] .
The number of α -agents is N { 10 ,   20 ,   40 ,   70 } , the initial positions of these α -agents are selected randomly from [ 6 ,   30 ] × [ 1 ,   25 ] , and the initial velocities of these α -agents are selected randomly from [ 0 , 2 ] 2 . In addition, the initial position of the virtual γ -agent is [ 110 ,   130 ] , and the initial velocity of the virtual γ -agent is zero.
The visual results of the FFC algorithm in the complex obstacle scenario are shown in Figure 13. It can be seen that all α -agents of the FFC algorithm can bypass the complex obstacles and follow the γ -agent. Furthermore, the quantitative results of the FFC algorithm over 100 runs are shown in Figure 14. It can be observed that as the number of α -agents increases, all α -agents of the FFC algorithm can still follow the γ -agent and do not collide with the objects. In summary, the FFC algorithm is suitable for obstacle avoidance in complex obstacle environments.

6. Conclusions

In this paper, we proposed an FFC algorithm to solve the problem of multi-agents trapped in dynamic equilibrium under multiple obstacles. The proposed FFC algorithm computes the fuzzy expected velocity from the established membership functions and fuzzy rules when an agent is trapped in dynamic equilibrium. Furthermore, the FFC algorithm divides the sensing radius region of the agent into four subregions and designs a nonnegative subsection function to adjust the attractive/repulsive potentials in these subregions. The property analysis demonstrates that multi-agents achieve collision-free goal-reaching motion when the sufficient conditions in Theorems 1 and 2 are satisfied. The experimental results verify that the proposed algorithm enables multi-agents to escape from dynamic equilibrium and is suitable for obstacle avoidance in complex obstacle environments. However, the time taken for all agents to bypass obstacles and re-aggregate near the goal position increases with the number of agents. To overcome this problem, future work will focus on reducing the computational complexity and improving the scalability of the flocking control algorithm, including optimizing the pairwise attractive/repulsive potential and refining the multi-agent motion rules. In addition, we will work on improving the robustness of the flocking control algorithm, for example, by mitigating the effect of sensor noise.

Author Contributions

Conceptualization, methodology, software, investigation, and writing—original draft preparation, W.L.; conceptualization, formal analysis, investigation, writing—review and editing, and funding acquisition, X.S.; writing—review and editing, supervision, project administration, and funding acquisition, Y.J.; validation, data curation, and visualization, X.L.; resources and writing—review and editing, J.W.; data curation and visualization, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Foundation from the Guangxi Zhuang Autonomous Region under Grants AA23062038 and 2024GXNSFAA010270, and in part by the National Natural Science Foundation of China under Grants U23A20280 and 62161007.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yan, C.; Wang, C.; Xiang, X.; Low, K.H.; Wang, X.; Xu, X.; Shen, L. Collision-Avoiding Flocking with Multiple Fixed-Wing UAVs in Obstacle-Cluttered Environments: A Task-Specific Curriculum-Based MADRL Approach. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 10894–10908.
  2. Tran, V.P.; Garratt, M.A.; Kasmarik, K.; Anavatti, S.G.; Leong, A.S.; Zamani, M. Multi-gas source localization and mapping by flocking robots. Inf. Fusion 2023, 91, 665–680.
  3. Shi, L.; Cheng, Y.; Shao, J.; Sheng, H.; Liu, Q. Cucker-Smale flocking over cooperation-competition networks. Automatica 2022, 135, 109988.
  4. Li, J.; Zhang, W.; Su, H.; Yang, Y. Flocking of partially-informed multi-agent systems avoiding obstacles with arbitrary shape. Auton. Agents Multi-Agent Syst. 2015, 29, 943–972.
  5. Lyu, Y.; Hu, J.; Chen, B.M.; Zhao, C.; Pan, Q. Multivehicle Flocking with Collision Avoidance via Distributed Model Predictive Control. IEEE Trans. Cybern. 2021, 51, 2651–2662.
  6. Pignotti, C.; Vallejo, I.R. Flocking estimates for the Cucker–Smale model with time lag and hierarchical leadership. J. Math. Anal. Appl. 2018, 464, 1313–1332.
  7. Jiang, W.; Liu, K.; Charalambous, T. Multi-agent consensus with heterogeneous time-varying input and communication delays in digraphs. Automatica 2021, 135, 109950.
  8. Zou, Y.; An, Q.; Miao, S.; Chen, S.; Wang, X.; Su, H. Flocking of uncertain nonlinear multi-agent systems via distributed adaptive event-triggered control. Neurocomputing 2021, 465, 503–513.
  9. Yu, Z.; Jiang, H.; Hu, C. Second-order consensus for multiagent systems via intermittent sampled data control. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1986–2002.
  10. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420.
  11. Sakai, D.; Fukushima, H.; Matsuno, F. Flocking for Multirobots Without Distinguishing Robots and Obstacles. IEEE Trans. Control Syst. Technol. 2017, 25, 1019–1027.
  12. Jia, Y.; Wang, L. Leader–Follower Flocking of Multiple Robotic Fish. IEEE/ASME Trans. Mechatron. 2015, 20, 1372–1383.
  13. Wu, J.; Yu, Y.; Ma, J.; Wu, J.; Han, G.; Shi, J.; Gao, L. Autonomous Cooperative Flocking for Heterogeneous Unmanned Aerial Vehicle Group. IEEE Trans. Veh. Technol. 2021, 70, 12477–12490.
  14. Su, H.; Wang, X.; Lin, Z. Flocking of Multi-Agents with a Virtual Leader. IEEE Trans. Autom. Control 2009, 54, 293–307.
  15. Zhou, Z.; Ouyang, C.; Hu, L.; Xie, Y.; Chen, Y.; Gan, Z. A framework for dynamical distributed flocking control in dense environments. Expert Syst. Appl. 2024, 241, 122694.
  16. Yan, T.; Xu, X.; Li, Z.; Li, E. Flocking of multi-agent systems with unknown nonlinear dynamics and heterogeneous virtual leader. Int. J. Control Autom. Syst. 2021, 19, 2931–2939.
  17. Sun, Z.; Xu, B. A flocking algorithm of multi-agent systems to optimize the configuration. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; pp. 7680–7684.
  18. Zhao, W.; Chu, H.; Zhang, M.; Sun, T.; Guo, L. Flocking Control of Fixed-Wing UAVs with Cooperative Obstacle Avoidance Capability. IEEE Access 2019, 7, 17798–17808.
  19. Qiu, Z.; Zhao, X.; Li, S.; Xie, Y.; Chen, C.; Gui, W. Finite-time formation of multiple agents based on multiple virtual leaders. Int. J. Syst. Sci. 2018, 49, 3448–3458.
  20. Duhé, J.F.; Victor, S.; Melchior, P. Contributions on artificial potential field method for effective obstacle avoidance. Fract. Calc. Appl. Anal. 2021, 24, 421–446.
  21. Azzabi, A.; Nouri, K. An advanced potential field method proposed for mobile robot path planning. Trans. Inst. Meas. Control 2019, 41, 3132–3144.
  22. Semnani, S.H.; Liu, H.; Everett, M.; De Ruiter, A.; How, J.P. Multi-agent motion planning for dense and dynamic environments via deep reinforcement learning. IEEE Rob. Autom. Lett. 2020, 5, 3221–3226.
  23. Beaver, L.E.; Malikopoulos, A.A. An overview on optimal flocking. Annu. Rev. Control 2021, 51, 88–99.
  24. Szczepanski, R.; Bereit, A.; Tarczewski, T. Efficient local path planning algorithm using artificial potential field supported by augmented reality. Energies 2021, 14, 6642.
  25. Song, J.; Hao, C.; Su, J. Path planning for unmanned surface vehicle based on predictive artificial potential field. Int. J. Adv. Rob. Syst. 2020, 17, 1729881420918461.
  26. Bayat, F.; Najafinia, S.; Aliyari, M. Mobile robots path planning: Electrostatic potential field approach. Expert Syst. Appl. 2018, 100, 68–78.
  27. He, Z.; Chu, X.; Liu, C.; Wu, W. A novel model predictive artificial potential field based ship motion planning method considering COLREGs for complex encounter scenarios. ISA Trans. 2023, 134, 58–73.
  28. Szczepanski, R. Safe artificial potential field-novel local path planning algorithm maintaining safe distance from obstacles. IEEE Rob. Autom. Lett. 2023, 8, 4823–4830.
  29. Mao, X.; Zhang, H.; Wang, Y. Flocking of quad-rotor UAVs with fuzzy control. ISA Trans. 2018, 74, 185–193.
  30. Ntakolia, C.; Moustakidis, S.; Siouras, A. Autonomous path planning with obstacle avoidance for smart assistive systems. Expert Syst. Appl. 2023, 213, 119049.
  31. Yu, H.; Zhang, T.; Jian, J. Flocking with obstacle avoidance based on fuzzy logic. In Proceedings of the IEEE ICCA 2010, Xiamen, China, 9–11 June 2010; pp. 1876–1881.
  32. Sharma, A.K.; Singh, D.; Singh, V.; Verma, N.K. Aerodynamic Modeling of ATTAS Aircraft Using Mamdani Fuzzy Inference Network. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 3566–3576.
  33. Song, W.; Yang, Y.; Fu, M.; Qiu, F.; Wang, M. Real-Time Obstacles Detection and Status Classification for Collision Warning in a Vehicle Active Safety System. IEEE Trans. Intell. Transp. Syst. 2018, 19, 758–773.
  34. Wu, J.; Ji, Y.; Sun, X.; Liang, W. Obstacle Boundary Point and Expected Velocity-Based Flocking of Multiagents with Obstacle Avoidance. Int. J. Intell. Syst. 2023, 2023, 1493299.
Figure 1. The interaction framework between α -agents, γ -agent, and obstacles.
Figure 2. The position and velocity of β -agents in the Olfati-Saber flocking (OSF) algorithm. (a) Infinite wall obstacle. (b) Spherical obstacle.
Figure 3. The scenario of α -agent i being trapped in dynamic equilibrium.
Figure 4. Structure of fuzzy flocking control (FFC) algorithm.
Figure 5. Two membership functions of fuzzy variables.
Figure 6. Four subregions within the sensing radius of α -agent i.
Figure 7. Motion process of the two algorithms in spherical or wall obstacle scenario. The blue dots are the α -agents, the yellow solid line between blue dots is the neighborhood relationship between α -agents, the blue dotted lines are the motion trajectories of α -agents, and the green dot is the γ -agent.
Figure 8. The time taken for all α -agents to bypass obstacles and re-aggregate near the γ -agent over 100 runs in spherical or wall obstacle scenario.
Figure 9. Minimum distance between α -agent and objects over 100 runs in spherical or wall obstacle scenario.
Figure 10. Motion process of the two algorithms in spherical and wall obstacle combination scenario. The blue dots are the α -agents, the yellow solid line between blue dots is the neighborhood relationship between α -agents, the blue dotted lines are the motion trajectories of α -agents, and the green dot is the γ -agent.
Figure 11. Distance between the center of α -agents and γ -agent in spherical and wall obstacle combination scenario.
Figure 12. Minimum distance between α -agent and objects in spherical and wall obstacle combination scenario.
Figure 13. Motion process of the FFC algorithm in complex obstacle scenario. The blue dots are the α -agents, the yellow solid line between blue dots is the neighborhood relationship between α -agents, the blue dotted lines are the motion trajectories of α -agents, and the green dot is the γ -agent.
Figure 14. The quantitative results of the FFC algorithm over 100 runs in complex obstacle scenario. (a) The time taken for all α -agents to bypass obstacles and re-aggregate near the γ -agent. (b) Minimum distance between α -agent and objects.
Table 1. The rule base for fuzzy control.
(Each entry gives ϑ_out \ d_out for the corresponding (ϑ_in, d_in) pair.)

ϑ_in \ d_in    VS      S       M       L       VL
NB             P\S     P\S     P\L     P\VL    P\VL
N              PB\S    PB\S    PB\M    PB\VL   PB\VL
Z              P\S     P\S     P\M     P\VL    P\VL
P              NB\S    NB\S    NB\M    NB\VL   NB\VL
PB             N\S     N\S     N\L     N\VL    N\VL
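Table 1 can be encoded as a lookup from the fuzzified inputs (ϑ_in, d_in) to the output pair ϑ_out \ d_out. A sketch of that encoding (a crisp dictionary for illustration only; the paper applies Mamdani-style fuzzy inference over these rules, not a crisp lookup):

```python
# Rows: theta_in in {NB, N, Z, P, PB}; columns: d_in in {VS, S, M, L, VL}.
# Each entry is the (theta_out, d_out) pair from Table 1.
RULE_BASE = {
    "NB": {"VS": ("P", "S"),  "S": ("P", "S"),  "M": ("P", "L"),
           "L": ("P", "VL"),  "VL": ("P", "VL")},
    "N":  {"VS": ("PB", "S"), "S": ("PB", "S"), "M": ("PB", "M"),
           "L": ("PB", "VL"), "VL": ("PB", "VL")},
    "Z":  {"VS": ("P", "S"),  "S": ("P", "S"),  "M": ("P", "M"),
           "L": ("P", "VL"),  "VL": ("P", "VL")},
    "P":  {"VS": ("NB", "S"), "S": ("NB", "S"), "M": ("NB", "M"),
           "L": ("NB", "VL"), "VL": ("NB", "VL")},
    "PB": {"VS": ("N", "S"),  "S": ("N", "S"),  "M": ("N", "L"),
           "L": ("N", "VL"),  "VL": ("N", "VL")},
}

def rule_output(theta_in, d_in):
    """Look up the consequent (theta_out, d_out) for one fired rule."""
    return RULE_BASE[theta_in][d_in]
```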
Table 2. Experimental parameters.
Experiments A, B, and C: c_1^α = c_2^α = c_1^β = c_2^β = c_3^β = c_1^γ = c_2^γ = 1, d = d′ = 4, r = 6, a = 2, b = 20, λ = 0.8, h = 0.2, p_{i,m} = 5 m/s, ϵ_1 = 0.5, ϵ_2 = 0.5, τ_1 = 10,000, τ_2 = 100, τ_3 = 10, τ_4 = 0.1, d_safe = 1 m, m = 10, Δt = 0.1 s, T_m = 300.
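The parameter constraints stated in Section 5 (all feedback gains positive; τ_2 ≥ τ_3 ≥ τ_4 ≥ 0 with τ_1 ≫ τ_2) can be checked before a run. A sketch using the Table 2 values (the dictionary keys are our naming, not the paper's):

```python
# Table 2 values, flattened into a configuration dictionary.
PARAMS = {
    "c1a": 1.0, "c2a": 1.0, "c1b": 1.0, "c2b": 1.0,
    "c3b": 1.0, "c1g": 1.0, "c2g": 1.0,
    "tau1": 10_000.0, "tau2": 100.0, "tau3": 10.0, "tau4": 0.1,
    "d": 4.0, "r": 6.0, "d_safe": 1.0, "dt": 0.1, "T_m": 300,
}

def validate(p):
    """Check the feedback-gain and weight constraints from Section 5."""
    gains = [p[k] for k in ("c1a", "c2a", "c1b", "c2b", "c3b", "c1g", "c2g")]
    assert all(g > 0 for g in gains), "all feedback gains must be positive"
    assert p["tau1"] > p["tau2"] >= p["tau3"] >= p["tau4"] >= 0, \
        "weights must satisfy tau1 >> tau2 >= tau3 >= tau4 >= 0"
    assert p["r"] > p["d"] > 0, "sensing radius must exceed the desired distance"
    return True
```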
Liang, W.; Sun, X.; Ji, Y.; Liu, X.; Wu, J.; He, Z. Fuzzy Flocking Control for Multi-Agents Trapped in Dynamic Equilibrium Under Multiple Obstacles. Machines 2025, 13, 119. https://doi.org/10.3390/machines13020119