Article

A Modular Prescribed Performance Formation Control Scheme of a High-Order Multi-Agent System with a Finite-Time Extended State Observer

1
School of Mechanical and Power Engineering, Nanjing Tech University, Nanjing 211816, China
2
College of Transportation Engineering, Nanjing Tech University, Nanjing 211816, China
3
College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing 211816, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(9), 1783; https://doi.org/10.3390/electronics14091783
Submission received: 22 March 2025 / Revised: 22 April 2025 / Accepted: 25 April 2025 / Published: 27 April 2025
(This article belongs to the Special Issue Coordination and Communication of Multi-Robot Systems)

Abstract

This paper proposes a modular control framework for high-order nonlinear multi-agent systems (MASs) to achieve distributed finite-time formation tracking with a prescribed performance. The design integrates two modules to address uncertainties and safety constraints simultaneously. Module I—Prescribed Performance-Based Trajectory Generation: A virtual signal generator constructs collision/connectivity-aware reference trajectories by encoding time-varying performance bounds into formation errors. It ensures network rigidity and optimal formation convergence through dynamic error transformation. Module II—Anti-disturbance Tracking Control: A finite-time extended state observer (FTESO) estimates and compensates for uncertainties within a finite time, while a time-varying surface controller drives tracking errors into predefined performance funnels. This module guarantees rapid error convergence without violating the transient constraints from Module I. The simulations verified the accelerated formation reconfiguration under disturbances, and thus, demonstrated improved robustness and convergence over asymptotic approaches. The framework offers a systematic solution for safety-critical MAS coordination with heterogeneous high-order dynamics.

1. Introduction

The cooperative control of multi-agent systems (MASs) has attracted significant research attention in recent years, driven by broad applications ranging from robotic swarms to autonomous vehicles [1,2]. In particular, distributed consensus and formation control problems have been extensively studied under various control paradigms [3,4]. Researchers worldwide have proposed a wide array of approaches—including adaptive control, sliding-mode control, H∞ control, and finite-time control—to coordinate multiple agents in a desired formation [5,6,7]. While these methods have achieved notable success, the practical deployment of MASs still faces major challenges due to real-world uncertainties and operational constraints [8,9].
One fundamental challenge is the presence of model uncertainties and external disturbances acting on each agent. In many scenarios, accurate system models are difficult to obtain and agents are subject to unpredictable environmental disturbances [10,11]. To enhance the robustness, numerous estimation and compensation techniques have been explored. For example, neural networks, disturbance observers, and fuzzy logic systems have been used to learn or cancel unknown dynamics [12,13]. Among these, the extended state observer (ESO) has emerged as a powerful tool for the real-time estimation of unmodeled dynamics and disturbances [14,15]. ESO-based control schemes have been successfully applied in various multi-agent and robotic systems [16,17], including designs that achieve finite-time convergence in the observer for faster disturbance estimation [18]. However, a common limitation of many existing ESO-based controllers is that they guarantee only asymptotic stability [19]. In other words, the tracking errors eventually converge to zero but without a specified transient bound or settling time, and thus, the transient performance of the system cannot be assured. This lack of performance guarantee can be critical in missions that demand strict overshoot limits or fast convergence [20]. Achieving distributed finite-time formation tracking for a high-order nonlinear MAS is challenging, especially when collision avoidance, connectivity maintenance, and precise transient performance must be simultaneously ensured. Traditional methods typically offer asymptotic convergence without strict transient bounds and often neglect practical disturbances. Thus, designing a robust, modular controller capable of finite-time convergence under prescribed performance constraints remains a crucial open problem in MAS formation control research.
In addition to uncertainties, MAS formation control must consider geometric and communication constraints. Agents have limited communication ranges and must avoid collisions with each other during movement. Thus, neighboring agents are required to remain within a certain distance range to maintain network connectivity and prevent collisions [21]. These constraints impose performance bounds on the inter-agent distance errors and add complexity to the control design. Ensuring that such safety and connectivity constraints are respected at all times calls for a control strategy that can enforce time-varying limits on the system’s error variables [22]. An effective framework to address transient performance requirements and enforce error constraints is the prescribed performance control (PPC) methodology. PPC introduces a prescribed performance function (PPF) that defines allowable error evolutions, typically by confining the tracking error within a decaying boundary layer [23]. This ensures the error starts within a specified bound and decreases according to a desired performance profile, leading to a bounded overshoot and improved transient and steady-state behaviors [24]. The prescribed performance approach has been successfully applied in numerous nonlinear control systems and multi-agent coordination tasks [25]. For instance, Dai et al. (2022) developed an adaptive formation controller for nonholonomic mobile robots that guarantees predefined transient and steady-state error bounds using a prescribed performance technique [26]. By enforcing such performance constraints, PPC-based designs can maintain formation geometry and connectivity requirements while the agents converge to the desired configuration. However, most existing PPF/PPC schemes ensure convergence only asymptotically. The tracking errors approach zero as time approaches infinity, meaning the design does not guarantee a finite settling time [27].
The modular control approach proposed in [28] has been shown to effectively reduce the complexity of control systems. It decomposes the distributed optimal formation controller into two modules: a signal generator (Module I) and a tracking controller (Module II). This decomposition simplifies the control system, enhances its flexibility, and improves its maintainability, allowing a multi-agent system to minimize a global cost while preserving formation accuracy, and it offers a novel approach to the cooperative control of heterogeneous systems.
Motivated by the above challenges, our proposed approach offers significant advantages over traditional adaptive continuous control methods, which typically guarantee only asymptotic convergence and do not provide explicit bounds on transient performance. Unlike these methods, our modular control scheme ensures finite-time convergence with prescribed performance guarantees, allowing for more predictable and efficient formation tracking. Additionally, the integration of a finite-time extended state observer (FTESO) enables robust disturbance estimation, which is crucial for handling uncertainties in real-world environments, further distinguishing our method from adaptive continuous approaches. This makes our approach more suitable for safety-critical applications where precise control and disturbance rejection are essential. The main contributions of this work are summarized as follows:
1.
Distributed signal generator for formation: We developed a novel distributed signal generator mechanism to coordinate the formation. This virtual reference generator produces trajectory signals that each agent tracks, enabling the group of high-order agents to achieve the desired formation shape. Unlike traditional leader–follower schemes, the signal generator approach provides a unified reference without requiring a designated leader, and it inherently facilitates synchronization between agents.
2.
Finite-time extended state observer (FTESO): A finite-time extended state observer was designed for each agent to estimate the compounded effect of model uncertainties and external disturbances in real time. The observer was proven to converge in finite time, providing fast and accurate estimates of unmeasured states (such as higher-order derivatives and disturbance forces). These estimates were used to actively compensate for the uncertainties in the control law, which substantially improved the robustness.
3.
Prescribed performance control with connectivity guarantees: We formulated a distributed control law that incorporates the prescribed performance bounds into the feedback loop. By embedding the signal generator outputs as reference trajectories and utilizing the FTESO estimates, the proposed controller drives each agent’s tracking error to remain within a predefined performance funnel and reach zero in finite time. This ensures that both transient and steady-state performance specifications are met. Notably, by carefully designing the performance functions (constraints on the tracking errors), the controller guarantees connectivity maintenance and collision avoidance throughout the formation maneuver. Inter-agent distance errors are confined within safe limits so that all agents stay connected and no collisions occur by design.
In summary, the proposed control framework integrates a finite-time disturbance observer with prescribed performance control in a distributed MAS setting for the first time. It provides rigorous performance guarantees (both transient and finite-time convergence) for high-order nonlinear agents while explicitly addressing practical constraints like communication range and safety distances. Comparative theoretical analysis and numerical simulations are presented in the sequel to validate the effectiveness of the approach. The results demonstrate that our scheme achieved precise formation tracking with a superior convergence speed and guaranteed performance, overcoming the limitations of existing methods.

2. Preliminaries

2.1. Graphs and Rigidity Theory

An undirected graph with n vertices and m edges can be represented as $G = (V, E)$, where $V = \{1, 2, \dots, n\}$ and $E \subseteq V \times V$ denote the vertex set and edge set, respectively. The incidence matrix $H = [h_{ij}] \in \mathbb{R}^{n \times m}$ corresponds to the vertices and edges of graph G, with rows representing vertices and columns representing edges. For convenience, we introduce a directed-graph incidence matrix H, in which the column corresponding to edge $[i, j] \in E$ has 1 in the i-th row, −1 in the j-th row, and 0 elsewhere. In d-dimensional space, graph G can describe a multi-agent formation by assigning each vertex a coordinate $p_i \in \mathbb{R}^d$ ($i \in V$). Rigidity theory studies whether the graph G (multi-agent formation) can undergo only rigid transformations (translation, rotation, reflection). Let the vector $p = [p_1^T, p_2^T, \dots, p_n^T]^T \in \mathbb{R}^{nd}$ represent the realization, or configuration, of graph G. The framework $F = [G, p]$ has its rigidity function defined as
$$g_G(p) = [\dots, \|p_{i,j}\|^2, \dots]^T, \quad [i, j] \in E,$$
where $p_{i,j} = p_i - p_j$. The rigidity function $g_G(p)$ is key to characterizing the framework $F = [G, p]$.
Definition 1 ([29]).
In the m-dimensional space $\mathbb{R}^m$, a framework $F = [G, p]$ is rigid if, during any continuous motion, the framework satisfies $F(t) \in \mathrm{Iso}(F)$ for all $t \geq 0$, meaning that the distances between all robots remain unchanged throughout the motion.
Each rigid graph corresponds to a rigidity matrix, and the rigidity matrix of the framework F is defined as $R = \frac{1}{2}\frac{\partial g_G(p)}{\partial p}$. Each row of the rigidity matrix corresponds to an edge, and each column corresponds to a vertex. Consequently, each row of the rigidity matrix R corresponding to the framework F takes the following form:
$$[\,0_{1\times d}^T, \dots, p_{i,j}^T, \dots, 0_{1\times d}^T, \dots, -p_{i,j}^T, \dots, 0_{1\times d}^T\,]$$
Definition 2 ([30]).
A framework $F = [G, p]$ is infinitesimally rigid in a d-dimensional space if $\mathrm{rank}(R) = dn - \frac{d(d+1)}{2}$, where n is the number of vertices in the graph. This condition ensures that the framework cannot undergo non-trivial infinitesimal motions that preserve the distances between all pairs of connected agents.
Lemma 1 ([31]).
For any framework $F = [G, p]$ that is both minimally rigid and infinitesimally rigid in a d-dimensional space, the matrix $RR^T$, formed by the product of the rigidity matrix R and its transpose, is guaranteed to be positive definite.
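To see Definition 2 in action, the short Python sketch below (illustrative only, not from the paper; the positions and helper name are assumptions) assembles the rigidity matrix row by row according to the row structure given above and checks the rank condition for a square with one diagonal, the same graph used in Section 4.

```python
import numpy as np

def rigidity_matrix(p, edges):
    """p: (n, d) array of agent positions; edges: list of (i, j) index pairs."""
    n, d = p.shape
    R = np.zeros((len(edges), n * d))
    for row, (i, j) in enumerate(edges):
        diff = p[i] - p[j]              # p_{i,j}
        R[row, i*d:(i+1)*d] = diff      # +p_{i,j}^T in the column block of vertex i
        R[row, j*d:(j+1)*d] = -diff     # -p_{i,j}^T in the column block of vertex j
    return R

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]   # unit square plus one diagonal
p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
n, d = p.shape
R = rigidity_matrix(p, edges)
# rank(R) = d*n - d*(d+1)/2 = 5  ->  the framework is infinitesimally rigid
print(np.linalg.matrix_rank(R) == d * n - d * (d + 1) // 2)
```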

2.2. Problem Formulation

Consider a multi-agent system consisting of n agents. The dynamics of the i-th agent, ( i = 1 , , n ) , is described by the following high-order nonlinear equations:
$$\dot{x}_{i,k} = x_{i,k+1}, \quad k = 1, \dots, m-1,$$
$$\dot{x}_{i,m} = u_i + f_i(x_i) + d_i,$$
where $m \geq 1$ is the order of the multi-agent system, $x_i = [x_{i,1}, \dots, x_{i,m}]^T$ is the state vector of agent i, $u_i$ is the control input, $f_i(x_i)$ represents the model uncertainties, and $d_i$ denotes the external disturbances.
The control objective of this study was to develop a decentralized robust control framework for multi-agent formation systems that achieves precise formation convergence while maintaining critical operational constraints. These include prescribed performance constraints, which ensure that formation-tracking errors converge within specified bounds on the overshoot, settling time, and steady-state error, and safety constraints, such as maintaining connectivity between agents and enforcing collision avoidance by ensuring that inter-agent distances remain above a predefined minimum. The proposed modular control scheme guarantees finite-time convergence while satisfying these constraints, ensuring that the agents remain within safe operational limits and track the desired formation with the desired performance. The FTESO enables robust disturbance estimation, further enhancing the system reliability.

3. Distributed FTESO-Based Prescribed Performance Design

3.1. Distributed Prescribed Performance Signal Generator Design

In this subsection, the design of the distributed prescribed performance signal generator based on a virtual first-order multi-agent system is described. The virtual first-order multi-agent system is described as
$$\dot{\varphi}_i = \varsigma_i, \quad i = 1, \dots, n,$$
where $\varphi_i = [\varphi_{i,1}, \dots, \varphi_{i,m}]^T$ are the virtual system states, $\varsigma_i = [\varsigma_{i,1}, \dots, \varsigma_{i,m}]^T$ are the virtual system control inputs, and $y_i^{\varphi} = \varphi_i$ are the system outputs. Let the desired formation be defined by a minimally and infinitesimally rigid framework $F^* = \{G^*, \varphi^*\}$, where $G^* = \{V^*, E^*\}$, $\dim(V^*) = n$, $\dim(E^*) = l$, and $\varphi^* = \mathrm{col}(\varphi_i^*)$. Let the desired formation distances between agent i and its neighboring agents in $F^*$ be given by $\Delta_{i,j} = \|\varphi_i^* - \varphi_j^*\| > 0$, where $i, j \in V^*$. Denote the relative position vectors between neighboring agents as
$$\varphi_{i,j} = \varphi_i - \varphi_j, \quad (i, j) \in E^*.$$
Hence, for each edge in the rigid graph, the formation distance error is given by
$$e_{i,j} = \|\varphi_{i,j}\| - \Delta_{i,j}, \quad (i, j) \in E^*.$$
It is obvious from Equation (6) that $e_{i,j} \in [-\Delta_{i,j}, \infty)$. Next, the collision avoidance and connectivity maintenance strategies between neighboring agents based on $e_{i,j}$ are discussed.
In practice, since each agent has a limited communication range, neighboring agents must remain within their common communication radius during operation. At the same time, no collision should occur between the agents during the movement toward the desired formation. In general, to solve these two problems, we assumed that each agent is surrounded by a small circular safety region and a larger circular communication region. As shown in Figure 1, $x_i$ and $x_j$ denote the positions of the i-th and j-th agents, $r_i$ and $r_j$ denote the safety radii of the two agents, and $R_i$ and $R_j$ denote the communication ranges of the two agents, respectively.
Define $r_{i,j} = r_i + r_j > 0$ and $R_{i,j} = \min(R_i + r_j, R_j + r_i) > 0$ for $(i, j) \in E^*$. In order to fulfill the requirements of collision avoidance and connectivity maintenance, $r_{i,j} < \|\varphi_{i,j}\| < R_{i,j}$, $(i, j) \in E^*$, must be satisfied. Introducing the formation distance error $e_{i,j}$ into the above inequality yields $r_{i,j} - \Delta_{i,j} < e_{i,j} < R_{i,j} - \Delta_{i,j}$, $(i, j) \in E^*$. Next, we set the following performance constraints for $e_{i,j}$:
$$-\underline{e}_{i,j} < e_{i,j} < \bar{e}_{i,j}, \quad (i, j) \in E^*,$$
where $\underline{e}_{i,j} > 0$ and $\bar{e}_{i,j} > 0$ were designed to satisfy $\underline{e}_{i,j}(0) \leq \Delta_{i,j} - r_{i,j}$ and $\bar{e}_{i,j}(0) \leq R_{i,j} - \Delta_{i,j}$. This ensures connectivity maintenance and collision avoidance between neighboring agents.
Define the squared distance error $\varpi_{i,j} = \|\varphi_{i,j}\|^2 - \Delta_{i,j}^2$, $(i, j) \in E^*$; then, according to (6), it follows that
$$\varpi_{i,j} = e_{i,j}\left(\|\varphi_{i,j}\| + \Delta_{i,j}\right), \quad (i, j) \in E^*.$$
Since $e_{i,j} \in [-\Delta_{i,j}, \infty)$, it is easy to see from (8) that $\varpi_{i,j} = 0$ if and only if $e_{i,j} = 0$. Thus, we used the squared distance error $\varpi_{i,j}$, rather than $e_{i,j}$ directly, to define a concrete prescribed performance bound. In this study, we used a performance function $\varrho_{i,j}$, which is given in Section 3.2. The goal of ensuring both transient and steady-state performance is achieved if the following condition is always fulfilled:
$$\underline{\varpi}_{i,j} < \varpi_{i,j} < \bar{\varpi}_{i,j}, \quad (i, j) \in E^*,$$
where $\underline{\varpi}_{i,j} = -\underline{h}_{i,j}\varrho_{i,j}$ and $\bar{\varpi}_{i,j} = \bar{h}_{i,j}\varrho_{i,j}$, in which $\underline{h}_{i,j}, \bar{h}_{i,j} > 0$ are positive parameters. Moreover, according to (8), $\underline{e}_{i,j}(0)$ and $\bar{e}_{i,j}(0)$ can also be associated with $\underline{\varpi}_{i,j}(0) = -\underline{h}_{i,j}\varrho_{i,j,0}$ and $\bar{\varpi}_{i,j}(0) = \bar{h}_{i,j}\varrho_{i,j,0}$ through the following relations:
$$2\Delta_{i,j}\underline{e}_{i,j}(0) - \underline{e}_{i,j}^2(0) = \underline{h}_{i,j}\varrho_{i,j,0}, \qquad \bar{e}_{i,j}^2(0) + 2\Delta_{i,j}\bar{e}_{i,j}(0) = \bar{h}_{i,j}\varrho_{i,j,0}.$$
Thus, if $\underline{e}_{i,j}(0)$ and $\bar{e}_{i,j}(0)$ fulfill the previous condition, then we can always assume that $\varrho_{i,j,0} = 1$ and find suitable $\underline{h}_{i,j}$ and $\bar{h}_{i,j}$ that guarantee the performance requirements in (10) for connectivity maintenance and collision avoidance between neighboring agents.
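To make the parameter selection concrete, the hypothetical helper below (a minimal sketch, not from the paper) maps the safety data of one edge to $\underline{h}_{i,j}$ and $\bar{h}_{i,j}$ through the relations above, assuming $\varrho_{i,j,0} = 1$ and the largest admissible initial error bounds $\underline{e}_{i,j}(0) = \Delta_{i,j} - r_{i,j}$ and $\bar{e}_{i,j}(0) = R_{i,j} - \Delta_{i,j}$.

```python
def performance_params(Delta, r_ij, R_ij, rho0=1.0):
    """Map one edge's desired distance and safety radii to (h_low, h_up)."""
    e_low0 = Delta - r_ij                               # admissible lower-bound magnitude
    e_up0 = R_ij - Delta                                # admissible upper bound
    h_low = (2.0 * Delta * e_low0 - e_low0**2) / rho0   # from the first relation above
    h_up = (e_up0**2 + 2.0 * Delta * e_up0) / rho0      # from the second relation above
    return h_low, h_up

# Example with the Section 4 values (Delta = 1, r_ij = 0.4, R_ij = 5.2)
print(performance_params(1.0, 0.4, 5.2))
```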
According to (6) and (8), one can obtain the formation distance error dynamics as $\dot{e}_{i,j} = \varphi_{i,j}^T(\varsigma_i - \varsigma_j)/(e_{i,j} + \Delta_{i,j})$, $(i, j) \in E^*$. The time derivative of $\varpi_{i,j}$ is given by
$$\dot{\varpi}_{i,j} = 2\varphi_{i,j}^T(\varsigma_i - \varsigma_j), \quad (i, j) \in E^*.$$
To deal with the time-varying constraints in (9), we transformed the original dynamic error system with constraints into a new equivalent unconstrained system that can be ensured to satisfy the limits given in (10). First, consider the following standard error:
$$\Lambda_{i,j} = \varpi_{i,j}/\varrho_{i,j}, \quad (i, j) \in E^*.$$
The following error transformations are introduced to transform the constrained error system into an equivalent unconstrained one:
$$\xi_{i,j} = \frac{1}{2}\ln\frac{\bar{h}_{i,j}\Lambda_{i,j} + \bar{h}_{i,j}\underline{h}_{i,j}}{\bar{h}_{i,j}\underline{h}_{i,j} - \underline{h}_{i,j}\Lambda_{i,j}}, \quad (i, j) \in E^*.$$
It is clear that $\varpi_{i,j} \to 0$ if and only if $\xi_{i,j} \to 0$. Now, taking the time derivative of $\xi_{i,j}$ yields $\dot{\xi}_{i,j} = \frac{1}{2}p_{i,j}(\dot{\varpi}_{i,j} - \Lambda_{i,j}\dot{\varrho}_{i,j})$, where
$$p_{i,j} = \frac{1}{\varrho_{i,j}}\left(\frac{1}{\Lambda_{i,j} + \underline{h}_{i,j}} - \frac{1}{\Lambda_{i,j} - \bar{h}_{i,j}}\right), \quad (i, j) \in E^*.$$
The control input for each virtual agent can be expressed as
$$\varsigma_i = -\sum_{j \in N_i(E^*)} k_{i,j}\,p_{i,j}\,\xi_{i,j}\,\varphi_{i,j}, \quad (i, j) \in E^*,$$
where $k_{i,j} > 0$. Then, the closed-loop system is
$$\dot{\varphi}_i = -\sum_{j \in N_i(E^*)} k_{i,j}\,p_{i,j}\,\xi_{i,j}\,\varphi_{i,j}, \quad (i, j) \in E^*.$$
Based on the structure of the rigidity matrix, ϖ ˙ i , j , ξ ˙ i , j , and ς i can be written in compact forms as
$$\dot{\varpi} = 2R(\varphi)\varsigma,$$
$$\dot{\xi} = \frac{1}{2}p\,(\dot{\varpi} - \dot{\varrho}\Lambda),$$
$$\varsigma = -R(\varphi)^T p K \xi,$$
where $\varsigma = \mathrm{col}(\varsigma_i)$, $\varpi = \mathrm{col}(\varpi_{i,j})$, $\xi = \mathrm{col}(\xi_{i,j})$, $p = \mathrm{diag}(p_{i,j})$, $\dot{\varrho} = \mathrm{diag}(\dot{\varrho}_{i,j})$, and $\Lambda = \mathrm{col}(\Lambda_{i,j})$ for $(i, j) \in E^*$, and $K = \mathrm{diag}(k_{i,j})$.
It has been shown that the preceding design achieves static formation tracking; however, in real-world scenarios, dynamic formation tracking is often necessary. In accordance with the existing literature [32], the external reference velocity command term $nM_Lv_0$ is introduced on the basis of the signal generator (18), i.e.,
$$\varsigma = -R(\varphi)^T p K \xi + nM_Lv_0,$$
where $M_L$ is a leader selection matrix that injects the velocity command through the leader nodes while maintaining formation stability, and $v_0$ is the external reference velocity.
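To illustrate the flow of Module I, the Python sketch below gives one evaluation of the virtual control law under simplifying assumptions: a single exponential performance function shared by all edges (only its initial and steady-state values appear in Section 4, so the decay rate here is hypothetical), scalar per-edge gains, and integration left to the caller. The function and variable names are illustrative, not the paper's.

```python
import numpy as np

def rho(t, rho0=1.0, rho_inf=0.03, decay=1.0):
    """Assumed exponential performance function rho(t)."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def signal_generator_step(phi, edges, Delta, h_low, h_up, k, t, v_cmd):
    """phi: (n, d) virtual states; edges: list of (i, j); Delta, h_low, h_up, k:
    per-edge parameters; v_cmd: (n, d) leader velocity-command term (zero rows
    for non-leader agents). Returns the virtual control, i.e. phi_dot."""
    varsigma = np.array(v_cmd, dtype=float)
    for e, (i, j) in enumerate(edges):
        phi_ij = phi[i] - phi[j]
        varpi = phi_ij @ phi_ij - Delta[e] ** 2            # squared distance error
        Lam = varpi / rho(t)                               # normalized error
        xi = 0.5 * np.log(h_up[e] * (Lam + h_low[e])
                          / (h_low[e] * (h_up[e] - Lam)))  # transformed error
        p = (1.0 / rho(t)) * (1.0 / (Lam + h_low[e]) + 1.0 / (h_up[e] - Lam))
        varsigma[i] -= k[e] * p * xi * phi_ij              # distributed law, agent i
        varsigma[j] += k[e] * p * xi * phi_ij              # mirrored term, agent j
    return varsigma
```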
Proposition 1. 
If the graph $G^*$ and the framework $F^*$ are minimally and infinitesimally rigid, then, under the controller (19), the outputs $y_i^{\varphi} = \varphi_i$ of the closed-loop virtual multi-agent system (15) achieve the desired formation while satisfying the prescribed performance requirement (9), which leads to connectivity maintenance and collision avoidance between neighboring agents.
Proof. 
Define the overall candidate Lyapunov function:
$$V(\xi) = \frac{1}{2}\xi^T K \xi.$$
Employing (16) and (17), the time derivative of V ( ξ ) is given by
$$\dot{V}(\xi) = \xi^T K p R(\varphi)\varsigma - \frac{1}{2}\xi^T K p \dot{\varrho}\Lambda.$$
Substituting (15) into V ˙ ( ξ ) and employing Lemma 1 yields
$$\dot{V}(\xi) = -\xi^T K p R R^T p K \xi + \xi^T K p R\, nM_Lv_0 - \frac{1}{2}\xi^T K p \dot{\varrho}\Lambda.$$
With the help of Young's inequality, we can obtain $\xi^T K p R\, nM_Lv_0 \leq 0.5\,\xi^T K p R R^T p K \xi + 0.5\,\|nM_Lv_0\|^2$. Let $\varepsilon$ be an arbitrarily small positive constant satisfying $\lambda_{\min}(RR^T) > \varepsilon^2/2$. Invoking Young's inequality on $-\frac{1}{2}\xi^T K p \dot{\varrho}\Lambda$ and exploiting the diagonality of the $K$, $p$, and $\dot{\varrho}$ matrices, we obtain
$$\begin{aligned}
\dot{V}(\xi) &\leq -\left(\frac{\lambda_{\min}(RR^T)}{2} - \frac{\varepsilon^2}{4}\right)\xi^T K p\, p K \xi + \frac{1}{4\varepsilon^2}\Lambda^T\dot{\varrho}^2\Lambda + \frac{\|nM_Lv_0\|^2}{2} \\
&\leq -\left(\frac{\lambda_{\min}(RR^T)}{2} - \frac{\varepsilon^2}{4}\right)\lambda_{\min}(p^2)\lambda_{\min}(K^2)\|\xi\|^2 + \frac{\|nM_Lv_0\|^2}{2} + \frac{1}{4\varepsilon^2}\lambda_{\max}(\dot{\varrho}^2)\|\Lambda\|^2 \\
&\leq -\lambda\|\xi\|^2 + \frac{\|nM_Lv_0\|^2}{2} + \frac{\gamma}{4\varepsilon^2}\lambda_{\max}(\dot{\varrho}^2),
\end{aligned}$$
where $\lambda = \left(\frac{\lambda_{\min}(RR^T)}{2} - \frac{\varepsilon^2}{4}\right)\lambda_{\min}(p^2)\lambda_{\min}(K^2) > 0$ and $\gamma = \sum_{(i,j) \in E^*}\max\{\underline{h}_{i,j}^2, \bar{h}_{i,j}^2\} > \sum_{(i,j) \in E^*}\Lambda_{i,j}^2 = \|\Lambda\|^2$. Let $\vartheta = \mathrm{col}\left(nM_Lv_0, \lambda_{\max}(\dot{\varrho}^2)\right)$. Since $\lambda_{\max}(\dot{\varrho}^2)$, $\|nM_Lv_0\|^2$, and $\|\Lambda\|^2$ are bounded, using Lemma 11.3 in [33] yields $\dot{V} \leq -\lambda\|\xi\|^2 + \beta(\|\vartheta\|)$, where $\beta$ is a class-$\mathcal{K}$ function. Therefore, $\xi$ is input-to-state stable (ISS) with respect to the input $\vartheta$. Due to the ISS property and the boundedness of $\vartheta$, there exists an ultimate bound $\bar{\xi} > 0$ such that $\|\xi\| \leq \bar{\xi}$. Based on this result, it is clear that the control input (15) is also bounded. Now, using $\bar{\xi}$ and taking the inverse of the logarithmic function in (13) leads to
$$-\underline{h}_{i,j} < -\frac{\bar{h}_{i,j}\left(1 - \exp(-2\bar{\xi})\right)}{\underline{h}_{i,j}\exp(-2\bar{\xi}) + \bar{h}_{i,j}}\,\underline{h}_{i,j} = \underline{\Lambda}_{i,j} \leq \Lambda_{i,j} \leq \bar{\Lambda}_{i,j} = \frac{\underline{h}_{i,j}\left(\exp(2\bar{\xi}) - 1\right)}{\underline{h}_{i,j}\exp(2\bar{\xi}) + \bar{h}_{i,j}}\,\bar{h}_{i,j} < \bar{h}_{i,j}.$$
Finally, multiplying (23) by $\varrho_{i,j}$ results in $-\underline{h}_{i,j}\varrho_{i,j} < \underline{\Lambda}_{i,j}\varrho_{i,j} \leq \varpi_{i,j} \leq \bar{\Lambda}_{i,j}\varrho_{i,j} < \bar{h}_{i,j}\varrho_{i,j}$ for $(i, j) \in E^*$, which further guarantees (8). □

3.2. FTESO-Based Prescribed Performance Tracking Controller Design

FTESO-based prescribed performance tracking controllers are designed for the agents of the original multi-agent system (4) such that the agents' outputs $y_i = x_{i,1}$, $i = 1, \dots, n$, track the reference outputs $y_i^{\varphi} = \varphi_i$ generated by the distributed prescribed performance signal generator (14). Since model uncertainties and disturbances (the so-called 'total disturbance') are present in the agents' models of the original multi-agent system (5), their effects need to be dealt with. Considering this aspect, the tracking controllers are designed as feedforward–feedback composite controllers, and the design consists of an FTESO design and a prescribed performance tracking controller design.

3.2.1. FTESO Design

By letting $G_i(t) = f_i(x_i) + d_i$, $x_{i,m+1} = G_i(t)$, and $\dot{G}_i(t) = g_i(t)$, the multi-agent system (5) can be written in the augmented state-space form
$$\dot{x}_{i,k} = x_{i,k+1}, \quad k = 1, \dots, m-1,$$
$$\dot{x}_{i,m} = x_{i,m+1} + u_i,$$
$$\dot{x}_{i,m+1} = g_i(t), \quad i = 1, \dots, n.$$
Assumption 1. 
The total disturbance $G_i(t) = f_i(x_i) + d_i$ acting on each agent is differentiable and bounded. Specifically, there exists a known positive constant c such that $|G_i(t)| \leq c$ for all $t \geq 0$.
In many real-world scenarios, the plant dynamics represented by $g_i(t)$ are largely unknown. In this case, the FTESO for (26) is given as
$$\dot{\hat{x}}_{i,k} = \hat{x}_{i,k+1} + \alpha_{i,k}\,\mathrm{sig}^{(\beta_i+1)/2}(x_{i,1} - \hat{x}_{i,1}), \quad k = 1, \dots, m-1,$$
$$\dot{\hat{x}}_{i,m} = \hat{x}_{i,m+1} + \alpha_{i,m}\,\mathrm{sig}^{(\beta_i+1)/2}(x_{i,1} - \hat{x}_{i,1}) + u_i,$$
$$\dot{\hat{x}}_{i,m+1} = \alpha_{i,m+1}\,\mathrm{sig}^{(\beta_i+1)/2}(x_{i,1} - \hat{x}_{i,1}), \quad i = 1, \dots, n,$$
where $\hat{x}_i = [\hat{x}_{i,1}, \dots, \hat{x}_{i,m}, \hat{x}_{i,m+1}]^T$ are the state variables of the FTESO, and $0 < \beta_i < 1$ and $\alpha_{i,k}$, $k = 1, \dots, m+1$, are appropriate parameters to be chosen.
Let $\tilde{x}_{i,k} = \hat{x}_{i,k} - x_{i,k}$, $k = 1, \dots, m+1$, $i = 1, \dots, n$. From (26) and (27), the observer estimation error dynamics can be written as
$$\dot{\tilde{x}}_{i,k} = \tilde{x}_{i,k+1} - \alpha_{i,k}\,\mathrm{sig}^{(\beta_i+1)/2}(\tilde{x}_{i,1}), \quad k = 1, \dots, m,$$
$$\dot{\tilde{x}}_{i,m+1} = -g_i(t) - \alpha_{i,m+1}\,\mathrm{sig}^{(\beta_i+1)/2}(\tilde{x}_{i,1}), \quad i = 1, \dots, n.$$
It follows from Lemma 2 that the observation errors $\tilde{x}_{i,k}$ are finite-time stable.
Lemma 2 ([34]).
Under the assumption that the disturbances are bounded, and subject to the conditions imposed by the observer gain, it is established that system (28) will converge in a finite time.
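A minimal Python sketch of one Euler step of the observer for a single agent with scalar states is given below; the state layout, step size, and function names are assumptions for illustration, and sig denotes the signed power function $\mathrm{sig}^a(x) = |x|^a\,\mathrm{sign}(x)$.

```python
import numpy as np

def sig(x, a):
    """Signed power function sig^a(x)."""
    return np.abs(x) ** a * np.sign(x)

def fteso_step(x_hat, y, u, alpha, beta, dt):
    """x_hat: (m+1,) observer state [x̂_{i,1}, ..., x̂_{i,m+1}]; y: measurement
    x_{i,1}; u: control input; alpha: (m+1,) observer gains; beta in (0, 1)."""
    m = len(x_hat) - 1
    corr = sig(y - x_hat[0], (beta + 1.0) / 2.0)      # innovation term
    dx = np.empty_like(x_hat)
    dx[:m-1] = x_hat[1:m] + alpha[:m-1] * corr        # k = 1, ..., m-1
    dx[m-1] = x_hat[m] + alpha[m-1] * corr + u        # k = m, driven by the input
    dx[m] = alpha[m] * corr                           # extended state (total disturbance)
    return x_hat + dt * dx                            # explicit Euler update
```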

3.2.2. Prescribed Performance Tracking Controller Design

By embedding the outputs $z_i$, $i = 1, \dots, n$, of the signal generator (16) into the feedback loop and setting them as the references for the real agents, denote the tracking errors $e_{i,1} = \hat{x}_{i,1} - z_i$ and $e_{i,k} = \hat{x}_{i,k}$, $k = 2, \dots, m$, $i = 1, \dots, n$.
To deal with the high-order dynamics, certain time-varying surfaces over the neighborhood error are defined as
$$s_i(e_{i,1}, \dots, e_{i,m}) = \sum_{k=0}^{m-1}\binom{m-1}{k}\lambda^k e_{i,m-k}.$$
Assume that $|s_i| < \varrho_i$, $i = 1, \dots, n$, for all $t \geq 0$, where the $\varrho_i$ are prescribed performance functions. The appropriate selection of the design parameters of the performance functions guarantees the convergence of the disagreement variables $e_{i,k}$, $k = 1, \dots, m$, $i = 1, \dots, n$, arbitrarily close to the origin.
The following theorem captures the main achievement of this work. It proposes an FTESO-based prescribed performance tracking controller that achieves convergence of the tracking errors arbitrarily close to the origin by guaranteeing that $|s_i| < \varrho_i$, $i = 1, \dots, n$, for all $t \geq 0$.
Theorem 1. 
If Assumption 1 holds, then, under the tracking controllers
$$u_i = -\frac{k_i}{2\varrho_i}\,\frac{\ln\!\left(\dfrac{1 + s_i/\varrho_i}{1 - s_i/\varrho_i}\right)}{\left(1 + \dfrac{s_i}{\varrho_i}\right)\left(1 - \dfrac{s_i}{\varrho_i}\right)} - \hat{x}_{i,m+1}, \quad i = 1, \dots, n,$$
with $k_i > 0$, $i = 1, \dots, n$, and the prescribed performance signal generator (16) for the multi-agent system (4), the agents' outputs track the signal generator outputs in formation; thus, the prescribed performance formation control goal is achieved.
Proof. 
To prove our concept, we first define the normalized error vector:
$$\Lambda_i = \frac{s_i}{\varrho_i} = \varrho_i^{-1}s_i, \quad i = 1, \dots, n.$$
Differentiating $\Lambda_i$ with respect to time, the following is obtained:
$$\dot{\Lambda}_i = \varrho_i^{-1}\left(\dot{s}_i - \dot{\varrho}_i\Lambda_i\right), \quad i = 1, \dots, n.$$
Differentiating (29) with respect to time and exploiting the dynamics of the agents from (4), $\dot{s}_i$ becomes
$$\dot{s}_i = \hat{x}_{i,m+1} - \alpha_{i,m}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1}) + u_i - \dot{z}_i + \sum_{k=1}^{m-1}\binom{m-1}{k}\lambda^k\left(e_{i,m-k+1} - \alpha_{i,m-k}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1})\right), \quad i = 1, \dots, n.$$
Substituting (33) and the distributed control protocol (30) into (32), the closed loop system is obtained:
$$\dot{\Lambda}_i = \varrho_i^{-1}\Big[\hat{x}_{i,m+1} - \alpha_{i,m}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1}) - k_i\varrho_i^{-1}r_i\xi_i + \sum_{k=1}^{m-1}\binom{m-1}{k}\lambda^k\left(e_{i,m-k+1} - \alpha_{i,m-k}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1})\right) - \hat{x}_{i,m+1} - \dot{z}_i - \dot{\varrho}_i\Lambda_i\Big], \quad i = 1, \dots, n,$$
where $r_i = \frac{1}{(1+\Lambda_i)(1-\Lambda_i)}$ and $\xi_i = \frac{1}{2}\ln\frac{1+\Lambda_i}{1-\Lambda_i}$. To continue, let us define the positive-definite function
$$V_{\xi_i} = \frac{1}{2}\xi_i^2, \quad i = 1, \dots, n.$$
Notice from (31) to (34) that
$$\dot{\xi}_i = r_i\varrho_i^{-1}\Big[\hat{x}_{i,m+1} - \alpha_{i,m}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1}) - k_i\varrho_i^{-1}r_i\xi_i + \sum_{k=1}^{m-1}\binom{m-1}{k}\lambda^k\left(e_{i,m-k+1} - \alpha_{i,m-k}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1})\right) - \hat{x}_{i,m+1} - \dot{z}_i - \dot{\varrho}_i\Lambda_i\Big], \quad i = 1, \dots, n.$$
Hence, differentiating (35) with respect to time and substituting (36), the following is obtained:
$$\begin{aligned}
\dot{V}_{\xi_i} &= \xi_i r_i\varrho_i^{-1}\Big[-\alpha_{i,m}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1}) - k_i\varrho_i^{-1}r_i\xi_i + \sum_{k=1}^{m-1}\binom{m-1}{k}\lambda^k\left(e_{i,m-k+1} - \alpha_{i,m-k}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1})\right) - \dot{z}_i - \dot{\varrho}_i\Lambda_i\Big] \\
&= -k_i\varrho_i^{-2}r_i^2\xi_i^2 + \xi_i r_i\varrho_i^{-1}\Big[-\alpha_{i,m}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1}) - \dot{z}_i - \dot{\varrho}_i\Lambda_i + \sum_{k=1}^{m-1}\binom{m-1}{k}\lambda^k\left(e_{i,m-k+1} - \alpha_{i,m-k}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1})\right)\Big].
\end{aligned}$$
Owing to the boundedness of $\Lambda_i \in (-1, 1)$ and the boundedness of $\dot{z}_i$, $\dot{\varrho}_i$, and $\tilde{x}_{i,1}$, either by assumption or by construction, the application of the Extreme Value Theorem guarantees the existence of a positive constant $\bar{H}$ such that
$$\left|-\alpha_{i,m}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1}) - \dot{z}_i - \dot{\varrho}_i\Lambda_i + \sum_{k=1}^{m-1}\binom{m-1}{k}\lambda^k\left(e_{i,m-k+1} - \alpha_{i,m-k}\,\mathrm{sig}^{\frac{\beta_i+1}{2}}(\tilde{x}_{i,1})\right)\right| \leq \bar{H}.$$
Thus, with the help of Young's inequality, $\dot{V}_{\xi_i}$ becomes
$$\dot{V}_{\xi_i} \leq -k_i\varrho_i^{-2}r_i^2\xi_i^2 + |\xi_i|\,r_i\varrho_i^{-1}\bar{H} \leq -\left(k_i - \frac{1}{2}\right)\varrho_i^{-2}r_i^2\xi_i^2 + \frac{1}{2}\bar{H}^2.$$
Moreover, it can be easily deduced that $r_i \geq 1$ and $\varrho_i^{-1} \geq \lambda_{\varrho_i} = 1/\varrho_{i,0}$ by construction. Hence, after some straightforward algebraic manipulations, the following is obtained:
$$\dot{V}_{\xi_i} \leq -K_i\lambda_{\varrho_i}^2\xi_i^2 + \frac{1}{2}\bar{H}^2,$$
where $K_i = k_i - \frac{1}{2}$. Therefore, $\dot{V}_{\xi_i} < 0$ when $|\xi_i| > \sqrt{\frac{\bar{H}^2}{2K_i\lambda_{\varrho_i}^2}}$. Thus, it is concluded that
$$|\xi_i| \leq \bar{\xi}_i = \max\left\{\left|\xi_i(\Lambda_i(0))\right|, \sqrt{\frac{\bar{H}^2}{2K_i\lambda_{\varrho_i}^2}}\right\},$$
which, by taking the inverse of the logarithmic function in (29) (i.e., the hyperbolic tangent function), leads to
$$-1 < -\tanh(\bar{\xi}_i) = \underline{\Lambda}_i \leq \Lambda_i \leq \bar{\Lambda}_i = \tanh(\bar{\xi}_i) < 1,$$
for $i = 1, \dots, n$. Moreover, the control signals (29) remain bounded owing to the boundedness of $\hat{x}_{i,m+1}$:
$$|u_i| \leq \bar{u}_i = \frac{k_i\bar{\xi}_i}{\varrho_{i,\infty}\left(1 + \tanh(\bar{\xi}_i)\right)\left(1 - \tanh(\bar{\xi}_i)\right)}, \quad i = 1, \dots, n.$$
Finally, multiplying (41) by ϱ i , the following is also concluded for i = 1 , , n :
$$-\varrho_i < \underline{\Lambda}_i\varrho_i \leq s_i \leq \bar{\Lambda}_i\varrho_i < \varrho_i.$$
This ensures that $e_{i,k}$, $k = 1, \dots, m$, $i = 1, \dots, n$, is arbitrarily close to the origin. □
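For reference, a compact Python sketch of the Theorem 1 control law for a single agent follows; it is illustrative only, assumes the error vector has already been assembled from the FTESO states as described above, and takes the current value of the performance function as an input.

```python
import math

def tracking_control(e, x_hat_ext, rho_i, k_i, lam):
    """e: [e_{i,1}, ..., e_{i,m}] tracking errors built from the FTESO states;
    x_hat_ext: disturbance estimate x̂_{i,m+1}; rho_i: current value of the
    performance function; k_i, lam: positive gains."""
    m = len(e)
    # time-varying surface s_i = sum_{k=0}^{m-1} C(m-1, k) * lam^k * e_{i, m-k}
    s = sum(math.comb(m - 1, k) * lam ** k * e[m - 1 - k] for k in range(m))
    Lam = s / rho_i                                      # must remain in (-1, 1)
    xi = 0.5 * math.log((1.0 + Lam) / (1.0 - Lam))       # transformed surface error
    r = 1.0 / ((1.0 + Lam) * (1.0 - Lam))
    return -(k_i / rho_i) * r * xi - x_hat_ext           # control law of Theorem 1
```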
Remark 1. 
Recent studies have investigated the collective behaviors of multi-agent systems under switching topologies and extended neighbor interaction rules, significantly enhancing the adaptability in dynamically changing environments [35,36]. In contrast to these adaptive-topology consensus methods, our approach is tailored specifically for formation tracking under a fixed, undirected topology. We introduce a modular controller that integrates finite-time disturbance estimation with prescribed transient and steady-state performance bounds, effectively guaranteeing collision avoidance, connectivity, and precise formation accuracy in safety-critical tasks.
Remark 2. 
Extending our approach to general directed graphs presents certain challenges, as the assumptions made in our current work rely on an undirected communication topology. In directed graphs, the lack of mutual connectivity between agents may complicate the enforcement of the prescribed performance bounds and the maintenance of safety constraints, such as collision avoidance. Specifically, a directed graph could require additional mechanisms to ensure that each agent remains within the communication range of its neighbors and that the formation constraints are respected. Overcoming these challenges would involve developing more complex control laws and a distributed strategy for ensuring the necessary connectivity and performance guarantees, which is an important direction for future work.

4. Numerical Simulation

In this section, four double-link robotic arms were utilized as actual agents, and four virtual agents were designed as signal generators to verify the effectiveness of the control algorithm.
First, we considered a set of four virtual agents, as described by (5), in two dimensions. The desired configuration of this set was a square formation, defined by the minimally and infinitesimally rigid graph with the edge set $E^* = \{(1,2), (1,3), (1,4), (2,3), (3,4)\}$. We assumed that the desired distances between the neighboring agents in the rigid framework were $\Delta_{1,2} = \Delta_{1,4} = \Delta_{2,3} = \Delta_{3,4} = 1$ and $\Delta_{1,3} = 1.414$. The initial positions of the four virtual agents were set to $\varphi_1(0) = [0.3639, 0.6361]$, $\varphi_2(0) = [1.7026, 0.4526]$, $\varphi_3(0) = [0.4919, 0.2706]$, and $\varphi_4(0) = [2.0789, 0.0179]$. Meanwhile, in this simulation, we assumed that all the virtual agents had the same geometric shape and perceptual radius, i.e., $r_i = 0.2$ and $R_i = 5$. The external reference speed was set to $v_0 = [\sin(0.5t); \cos(0.5t)]$.
Then, we considered an actual agent system that consisted of four two-link robotic arms. The dynamics of each of the four two-link robotic arms can be described by an Euler–Lagrange (EL) equation as follows:
$$M_i(q_i)\ddot{q}_i + C_i(q_i, \dot{q}_i)\dot{q}_i + G_i(q_i) = \tau_i + d_i, \quad i = 1, 2, 3, 4.$$
Denote $x_{i,1} = q_i$ and $x_{i,2} = \dot{q}_i$; then, (45) can be rewritten as
$$\dot{x}_{i,1} = x_{i,2}, \qquad \dot{x}_{i,2} = x_{i,3} + M_i^{-1}\tau_i, \qquad \dot{x}_{i,3} = h_i,$$
where $x_{i,3} = M_i^{-1}\left(-C_i\dot{q}_i - G_i(q_i) + d_i\right)$.
The system matrices were given as
$$M_i = \begin{bmatrix} a_{i1} + a_{i2} + 2a_{i3}\cos(q_{i2}) & a_{i2} + a_{i3}\cos(q_{i2}) \\ a_{i2} + a_{i3}\cos(q_{i2}) & a_{i2} \end{bmatrix},$$
$$C_i = \begin{bmatrix} -a_{i3}\sin(q_{i2})\dot{q}_{i2} & -a_{i3}\sin(q_{i2})(\dot{q}_{i1} + \dot{q}_{i2}) \\ a_{i3}\sin(q_{i2})\dot{q}_{i1} & 0 \end{bmatrix},$$
$$G_i = \begin{bmatrix} a_{i3}\,g\cos(q_{i1}) + a_{i5}\,g\cos(q_{i1} + q_{i2}) \\ a_{i5}\,g\cos(q_{i1} + q_{i2}) \end{bmatrix},$$
with the parameter vector $\theta_i = \mathrm{col}(a_{i1}, a_{i2}, a_{i3}, a_{i4}, a_{i5})$.
For i = 1 , 2 , 3 , 4 , the actual values of θ i were given as follows:
$$\theta_1 = \mathrm{col}(0.64, 1.10, 0.08, 0.64, 0.32), \quad \theta_2 = \mathrm{col}(0.76, 1.17, 0.14, 0.96, 0.44),$$
$$\theta_3 = \mathrm{col}(0.91, 1.26, 0.22, 1.27, 0.58), \quad \theta_4 = \mathrm{col}(1.10, 1.36, 0.32, 1.67, 0.73).$$
We assumed that the initial position state of each robotic arm was chosen as $x_{1,1} = [0.3, 1.4]$, $x_{2,1} = [0.2, 0.8]$, $x_{3,1} = [0.6, 0.5]$, and $x_{4,1} = [0.3, 0.7]$, and the velocity states were all set to zero. The parameters of the PPF, FTESO, and controller were chosen as $\varrho_0 = 6$, $\varrho_\infty = 0.03$, $k_{i,j} = 0.2$ for $(i, j) \in E^*$, $\alpha_{i,1} = 7$, $\alpha_{i,2} = 15$, $\alpha_{i,3} = 10$, $\beta_i = 0.5$, $\lambda = 5$, and $k_i = 5$ for $i = 1, 2, 3, 4$.
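For completeness, a Python sketch of the two-link arm model used for the actual agents is given below; it follows the matrices listed above (with the sign convention adopted here for $C_i$), assumes $g = 9.81\ \mathrm{m/s^2}$, and uses an illustrative interface.

```python
import numpy as np

g = 9.81
theta = np.array([[0.64, 1.10, 0.08, 0.64, 0.32],
                  [0.76, 1.17, 0.14, 0.96, 0.44],
                  [0.91, 1.26, 0.22, 1.27, 0.58],
                  [1.10, 1.36, 0.32, 1.67, 0.73]])

def arm_dynamics(i, q, q_dot, tau, d):
    """Joint accelerations of arm i given state (q, q_dot), torque tau, disturbance d."""
    a1, a2, a3, a4, a5 = theta[i]   # a4 appears in theta but not in the printed matrices
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([[a1 + a2 + 2*a3*c2, a2 + a3*c2],
                  [a2 + a3*c2,        a2]])
    C = np.array([[-a3*s2*q_dot[1], -a3*s2*(q_dot[0] + q_dot[1])],
                  [ a3*s2*q_dot[0],  0.0]])
    G = np.array([a3*g*np.cos(q[0]) + a5*g*np.cos(q[0] + q[1]),
                  a5*g*np.cos(q[0] + q[1])])
    # q_ddot from M*q_ddot + C*q_dot + G = tau + d
    return np.linalg.solve(M, tau + d - C @ q_dot - G)
```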
The simulation results for the signal-generator-based distributed prescribed performance control algorithm are depicted in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9.
Figure 2 illustrates the trajectories of the four virtual agents under the control of the proposed signal generator. Starting from their respective initial positions, the agents progressively converged to the desired formation. Throughout the simulation process, safe distances were maintained between the agents, which ensured collision avoidance and connectivity maintenance, and thus, demonstrated the effectiveness of the proposed formation control method.
Figure 3 shows the evolution of formation errors for the virtual agents generated by the signal generator. The errors remained strictly confined within the prescribed performance envelope and rapidly converged toward zero. These results confirm the effectiveness of the proposed prescribed performance control at the virtual-trajectory-generation level, which ensured desirable transient and steady-state performance.
Figure 4 presents the time evolution of control input signals utilized to drive the virtual agents. Initially, the control inputs exhibited larger magnitudes to rapidly correct initial discrepancies, which subsequently decreased smoothly to steady levels. This behavior validated the capability of the signal generator to produce stable and smooth control commands for trajectory formation.
The trajectories of the actual agents (robotic manipulators) are presented in Figure 5, depicting their responses to the reference trajectories provided by the virtual agents. The actual agents exhibited stable convergence toward the desired formation configuration, where they accurately replicated the reference trajectories. This outcome highlights the precision and efficacy of the proposed tracking control strategy for real nonlinear systems.
Figure 6 presents the tracking errors of the actual agents relative to the reference trajectories, which remained strictly within the predefined performance bounds. Initially, significant errors diminished rapidly, where they eventually stabilized near zero. This indicates that the designed tracking controller effectively enforced the prescribed error constraints to ensure a high tracking accuracy.
Figure 7 displays the velocity profiles of the actual agents over time. The velocity trajectories were smooth and continuous, where they initially increased to correct position errors and subsequently decreased toward zero to avoid overshooting and oscillations. The smooth velocity transitions validated the capability of the proposed controller to effectively regulate the motion of actual agents, which ensured stability and smoothness during the tracking process.
Figure 8 depicts the estimation errors of the Finite-Time Extended State Observer (FTESO) over time. These errors converged swiftly to near zero in a finite time, demonstrating that the FTESO provides rapid and accurate estimations of unknown system states and disturbances. This capability significantly enhances the robustness and reliability of the control scheme.
Figure 9 illustrates the control inputs applied by the tracking controller to the actual agents. Initially, the control inputs exhibited stronger actions to promptly eliminate the tracking errors, which gradually reduced in magnitude to stabilize at steady-state values. The smoothness and bounded nature of the control signals demonstrate the stability and robustness of the proposed control strategy, enabling precise trajectory tracking for actual agents.

5. Conclusions

In this paper, we propose a modular prescribed performance control scheme integrating a finite-time extended state observer for robust formation tracking of high-order nonlinear multi-agent systems. Our theoretical analysis and simulations demonstrated that the proposed control method effectively achieved finite-time convergence and rigorously maintained prescribed transient and steady-state performance bounds, along with collision avoidance and connectivity constraints. However, in practical engineering scenarios, there are several challenges that may arise. For instance, real-world uncertainties, such as communication delays, system heterogeneity, and sensor inaccuracies, could affect the performance of the system. Additionally, while the proposed method guarantees finite-time convergence and performance bounds in simulations, scalability issues may arise when extending the framework to larger systems with more agents or in environments with dynamic disturbances. These challenges will need to be addressed in future work to further enhance the reliability and robustness of the approach in real-world applications. Future work will include extending this framework toward a fully distributed implementation, thereby eliminating the requirement for global graph information and enhancing the scalability. Additionally, we plan to address practical considerations, such as system heterogeneity, communication delays, and sensor inaccuracies in real-world applications.

Author Contributions

Writing—original draft preparation, Z.S.; writing—review and editing, C.Z. and W.H.; supervision and project administration, G.Z. All authors have read and agreed to the published version of this manuscript.

Funding

This work was supported by the Postgraduate Research and Practice Innovation Program of Jiangsu Province under Grant KYCX24_1608.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Amirkhani, A.; Barshooi, A.H. Consensus in multi-agent systems: A review. Artif. Intell. Rev. 2022, 55, 3897–3935. [Google Scholar] [CrossRef]
  2. Cai, K.; Qu, T.; Gao, B.; Chen, H. Consensus-based distributed cooperative perception for connected and automated vehicles. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8188–8208. [Google Scholar] [CrossRef]
  3. Meng, Z.; Ren, W.; Cao, Y.; You, Z. Leaderless and leader-following consensus with communication and input delays under a directed network topology. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2010, 41, 75–88. [Google Scholar] [CrossRef] [PubMed]
  4. Yu, J.; Yu, S.; Yan, Y. Fixed-time stability of stochastic nonlinear systems and its application into stochastic multi-agent systems. IET Control Theory Appl. 2021, 15, 126–135. [Google Scholar] [CrossRef]
  5. Chen, F.; Ren, W. On the control of multi-agent systems: A survey. Found. Trends Syst. Control 2019, 6, 339–499. [Google Scholar] [CrossRef]
  6. Zhou, Y.; Wen, G.; Wan, Y.; Fu, J. Consensus tracking control for a class of general linear hybrid multi-agent systems: A model-free approach. Automatica 2023, 156, 111198. [Google Scholar] [CrossRef]
  7. Yang, Y.; Shen, B.; Han, Q.L. Dynamic event-triggered scaled consensus of multi-agent systems in reliable and unreliable networks. IEEE Trans. Syst. Man Cybern. Syst. 2023, 54, 1124–1136. [Google Scholar] [CrossRef]
  8. Lu, M.; Liu, L. Leader-following attitude consensus of multiple rigid spacecraft systems under switching networks. IEEE Trans. Autom. Control 2019, 65, 839–845. [Google Scholar] [CrossRef]
  9. Lu, Y.; Wu, X.; Wang, Y.; Huang, L.; Wei, Q. Quantization-based event-triggered H∞ consensus for discrete-time Markov jump fractional-order multiagent systems with DoS attacks. Fractal Fract. 2024, 8, 147. [Google Scholar] [CrossRef]
  10. Wu, K.; Hu, J.; Ding, Z.; Arvin, F. Finite-time fault-tolerant formation control for distributed multi-vehicle networks with bearing measurements. IEEE Trans. Autom. Sci. Eng. 2023, 21, 1346–1357. [Google Scholar] [CrossRef]
  11. de Marina, H.G. Maneuvering and robustness issues in undirected displacement-consensus-based formation control. IEEE Trans. Autom. Control 2020, 66, 3370–3377. [Google Scholar] [CrossRef]
  12. Shojaei, K. Output-feedback formation control of wheeled mobile robots with actuators saturation compensation. Nonlinear Dyn. 2017, 89, 2867–2878. [Google Scholar] [CrossRef]
  13. Choi, Y.H.; Yoo, S.J. Neural-network-based distributed asynchronous event-triggered consensus tracking of a class of uncertain nonlinear multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 2965–2979. [Google Scholar] [CrossRef]
  14. Zhao, W.; Yu, W.; Zhang, H. Observer-based formation tracking control for leader–follower multi-agent systems. IET Control Theory Appl. 2019, 13, 239–247. [Google Scholar] [CrossRef]
  15. Xue, W.; Madonski, R.; Lakomy, K.; Gao, Z.; Huang, Y. Add-on module of active disturbance rejection for set-point tracking of motion control systems. IEEE Trans. Ind. Appl. 2017, 53, 4028–4040. [Google Scholar] [CrossRef]
  16. Hinkkanen, M.; Saarakkala, S.E.; Awan, H.A.A.; Molsa, E.; Tuovinen, T. Observers for sensorless synchronous motor drives: Framework for design and analysis. IEEE Trans. Ind. Appl. 2018, 54, 6090–6100. [Google Scholar] [CrossRef]
  17. Zhang, D.; Deng, C.; Feng, G. Resilient cooperative output regulation for nonlinear multiagent systems under DoS attacks. IEEE Trans. Autom. Control 2022, 68, 2521–2528. [Google Scholar] [CrossRef]
  18. Yu, Z.; Yu, S.; Jiang, H.; Mei, X. Distributed fixed-time optimization for multi-agent systems over a directed network. Nonlinear Dyn. 2021, 103, 775–789. [Google Scholar] [CrossRef]
  19. Tang, Y.; Zhu, K. Optimal consensus for uncertain high-order multi-agent systems by output feedback. Int. J. Robust Nonlinear Control 2022, 32, 2084–2099. [Google Scholar] [CrossRef]
  20. Qiao, Y.; Huang, X.; Yang, B.; Geng, F.; Wang, B.; Hao, M.; Li, S. Formation tracking control for multi-agent systems with collision avoidance and connectivity maintenance. Drones 2022, 6, 419. [Google Scholar] [CrossRef]
  21. Chen, L.; Shi, M.; de Marina, H.G.; Cao, M. Stabilizing and maneuvering angle rigid multiagent formations with double-integrator agent dynamics. IEEE Trans. Control Netw. Syst. 2022, 9, 1362–1374. [Google Scholar] [CrossRef]
  22. Gong, X.; Cui, Y.; Shen, J.; Xiong, J.; Huang, T. Distributed optimization in prescribed-time: Theory and experiment. IEEE Trans. Netw. Sci. Eng. 2021, 9, 564–576. [Google Scholar] [CrossRef]
  23. Dai, S.L.; Lu, K.; Fu, J. Adaptive finite-time tracking control of nonholonomic multirobot formation systems with limited field-of-view sensors. IEEE Trans. Cybern. 2021, 52, 10695–10708. [Google Scholar] [CrossRef] [PubMed]
  24. Lv, J.; Wang, C.; Kao, Y.; Jiang, Y. A fixed-time distributed extended state observer for uncertain second-order nonlinear system. ISA Trans. 2023, 138, 373–383. [Google Scholar] [CrossRef]
  25. Ge, P.; Li, P.; Chen, B.; Teng, F. Fixed-time convergent distributed observer design of linear systems: A kernel-based approach. IEEE Trans. Autom. Control 2022, 68, 4932–4939. [Google Scholar] [CrossRef]
  26. Qin, D.; Wu, J.; Liu, A.; Zhang, W.-A.; Yu, L. Cooperation and coordination transportation for nonholonomic mobile manipulators: A distributed model predictive control approach. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 848–860. [Google Scholar] [CrossRef]
  27. Derakhshannia, M.; Moosapour, S.S. Disturbance observer-based sliding mode control for consensus tracking of chaotic nonlinear multi-agent systems. Math. Comput. Simul. 2022, 194, 610–628. [Google Scholar] [CrossRef]
  28. Wang, X.; Liu, W.; Wu, Q.; Li, S. A modular optimal formation control scheme of multiagent systems with application to multiple mobile robots. IEEE Trans. Ind. Electron. 2021, 69, 9331–9341. [Google Scholar] [CrossRef]
  29. Jackson, B.; Jordan, T. Connected rigidity matroids and unique realizations of graphs. J. Comb. Theory Ser. B 2005, 94, 1–29. [Google Scholar] [CrossRef]
  30. Hendrickson, B. Conditions for unique graph realizations. Siam J. Comput. 1992, 21, 65–84. [Google Scholar] [CrossRef]
  31. Sun, Z.; Mou, S.; Deghat, M.; Anderson, B. Finite time distributed distance-constrained shape stabilization and flocking control for d-dimensional undirected rigid formations. Int. J. Robust Nonlinear Control 2015, 26, 2824–2844. [Google Scholar] [CrossRef]
  32. Mehdifar, F.; Bechlioulis, C.P.; Hashemzadeh, F.; Baradarannia, M. Prescribed performance distance-based formation control of multi-agent systems. Automatica 2020, 119, 109086. [Google Scholar] [CrossRef]
  33. Chen, Z.; Huang, J. Stabilization and Regulation of Nonlinear Systems; Springer International Publishing: Cham, Switzerland, 2015. [Google Scholar]
  34. Li, B.; Hu, Q.; Yang, Y. Continuous finite-time extended state observer based fault tolerant control for attitude stabilization. Aerosp. Sci. Technol. 2019, 84, 204–213. [Google Scholar] [CrossRef]
  35. Ning, B.; Han, Q.; Zuo, Z.; Ge, X.; Zhang, X. Collective behaviors of mobile robots beyond the nearest neighbor rules with switching topology. IEEE Trans. Cybern. 2018, 48, 1577–1590. [Google Scholar] [CrossRef]
  36. Cao, Y.; Yu, W.; Ren, W.; Chen, G. An overview of recent progress in the study of distributed multi-agent coordination. IEEE Trans. Ind. Inform. 2013, 9, 427–438. [Google Scholar] [CrossRef]
Figure 1. The safety regions and communication radii for two agents.
Figure 2. Trajectories of virtual agents.
Figure 3. Error envelope of the signal generator.
Figure 4. Control input signals of the signal generator.
Figure 5. Trajectories of actual agents.
Figure 6. Tracking error envelope.
Figure 7. Velocities of actual agents.
Figure 8. Estimation errors of FTESO.
Figure 9. Control input signals of the tracking controller.
