Article

Distributed Prescribed Performance Formation Tracking for Unknown Euler–Lagrange Systems Under Input Saturation

by Athanasios K. Gkesoulis 1,*, Andreani Christopoulou 2, Charalampos P. Bechlioulis 1,3 and George C. Karras 1,2
1 Athena Research Center, Robotics Institute, 15125 Marousi, Greece
2 Department of Informatics and Telecommunications, University of Thessaly, 35100 Lamia, Greece
3 Department of Electrical and Computer Engineering, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 6002; https://doi.org/10.3390/s25196002
Submission received: 9 August 2025 / Revised: 18 September 2025 / Accepted: 24 September 2025 / Published: 29 September 2025
(This article belongs to the Special Issue Cooperative Perception and Planning for Swarm Robot Systems)

Abstract

In this paper, we propose a distributed prescribed performance formation tracking control method for unknown Euler–Lagrange systems subject to input amplitude constraints. We address the challenge of maintaining formation tracking within predefined performance bounds when the agents’ inputs are subject to saturation. This is achieved by designing a distributed virtual velocity reference modification mechanism, which modifies the desired velocity reference of each agent whenever saturation occurs. We establish sufficient feasibility conditions for the input constraints that ensure prescribed performance formation tracking of the desired trajectory and guarantee the boundedness of all closed-loop signals. Simulations on a team of underwater vehicles validate the method’s effectiveness.

1. Introduction

The distributed coordination of multi-agent systems (MASs) has attracted extensive attention due to its capacity for the collaborative execution of complex tasks, resilience to individual failures, and adaptability in dynamic environments. These qualities have rendered MASs highly valuable for diverse real–world applications, such as autonomous vehicles, robotic swarms, and sensor networks [1,2]. Among numerous MAS challenges, formation tracking control is a pivotal task that ensures that the agents collectively reach and maintain a predefined formation around a leader agent, using limited, locally available information. Although formation control has been extensively studied in recent decades [3,4,5], substantial issues remain unresolved, especially for nonlinear, uncertain agent dynamics with constraints.
One critical issue is posed by inherent system nonlinearities, prevalent in practical systems due to actuator dynamics, friction, and other phenomena. Traditional linear control strategies typically fail in the aforementioned contexts, necessitating advanced methodologies that effectively compensate for these nonlinearities to ensure robust and stable performance. Several adaptive control studies have been reported to address nonlinear dynamics and uncertainties for multi-agent systems [6,7,8,9,10,11,12]. However, a fundamental issue that remains unaddressed in the aforementioned studies is the presence of input constraints resulting from actuator saturation, safety considerations, and energy limitations. Ignoring input constraints in nonlinear systems can induce significant performance deterioration or even instability, leading to integrator wind-up phenomena [13,14]. This challenge is particularly pronounced in MASs, because if an individual agent’s input saturates, the collective system’s performance is directly influenced, potentially hindering the achievement of global objectives. This issue has attracted research attention in recent years for single [15,16,17,18,19,20] and multi-agent nonlinear systems [21,22,23,24,25,26,27,28,29,30].
Another prevalent challenge in the control of nonlinear single- and multi-agent systems is to impose prescribed performance guarantees on the transient and steady-state behavior of the system, especially in the case where the system dynamics are unknown. Prescribed performance control (PPC) has emerged as a powerful control methodology capable of addressing the challenges posed by unknown nonlinearities and uncertainties. PPC guarantees stability with predefined transient and steady-state error bounds and allows for the specification of error convergence speed, maximum allowable overshoot, and maximum steady-state error even for unknown dynamics [31]. When considered in an MAS framework, PPC methodologies aim to guarantee that agents achieve group goals with predetermined transient behavior and steady-state errors for the multi-agent system despite the presence of uncertainties and nonlinear dynamics [32]. Several studies have extended PPC frameworks to handle input saturation in nonlinear single-agent systems [33,34,35,36]. These approaches, however, frequently employ computationally demanding approximation techniques, such as neural networks or observers, complicating their practical applicability. Recent research has addressed this issue by developing simpler controllers without relying on approximation schemes [37,38,39,40,41]. Specifically, methods involving adaptive adjustments of performance bounds, reference modification schemes, or switching control have proven effective for single-agent scenarios. Nonetheless, extending these solutions to multi-agent systems remains unaddressed.
Several recent studies have addressed the cooperative control problem for Euler–Lagrange multi-agent systems under input saturation. A coordinated formation containment algorithm that embeds a dynamic auxiliary system into each agent is developed in [23], ensuring actuator bounds remain independent of neighbor count while achieving simultaneous leader formation and follower containment for Euler–Lagrange systems with known dynamics. A model-independent formation-tracking scheme is introduced in [24], where formation control with disturbance rejection is achieved, assuming the nonlinear parameters of the Euler–Lagrange model are bounded. In [25], a distributed control design for the formation containment of Euler–Lagrange systems is developed utilizing linear matrix inequalities. A study [26] considered an event-triggered mechanism for Euler–Lagrange systems with known dynamics. More recently, ref. [29] presented an event-triggered adaptive protocol for formation control input, assuming bounded nonlinearities. In [30], a design employing virtual signal generators and sliding mode control is utilized to solve the cooperative control problem. However, most of the aforementioned studies impose restrictive assumptions on system dynamics and none addresses prescribed performance, i.e., guaranteeing that both transient and steady-state formation errors remain within designer-specified bounds in the presence of input saturation. Limited efforts have tackled prescribed performance formation tracking for nonlinear MASs under input constraints. The existing literature focuses on restrictive system classes, neural network approximations, or event-triggered mechanisms [42,43,44,45]. Consequently, a research gap persists regarding generalized, computationally efficient, and robust distributed control strategies that can simultaneously handle nonlinear dynamics and input saturation, as well as guarantee prescribed performance.
Motivated by the aforementioned challenges, the current paper introduces a novel low-complexity distributed PPC strategy for unknown Euler–Lagrange MASs with input constraints under directed communication. Our contributions are summarized as follows:
  • We propose a new robust, approximation-free distributed PPC strategy, where each agent leverages only local distances from its neighbors to achieve formation tracking around a leader trajectory.
  • A distributed virtual velocity reference modification mechanism is developed, enabling each agent to adjust its virtual velocity reference dynamically, in response to input saturation, thus preserving internal stability and feasibility.
  • Analytical lower bounds for the input saturation thresholds are derived, ensuring feasibility of the proposed control law within prescribed performance constraints.
  • A 3D simulation scenario for a group of unmanned underwater vehicles is provided to demonstrate the effectiveness of the distributed formation tracking control design.
Compared to the related work on the cooperative control of nonlinear systems [6,7,8,9,10,11,12], the main contributions of the present study are the explicit handling of input saturation and the provision of prescribed performance guarantees. Input saturation poses significant difficulty, especially when the considered dynamics are unknown, and it can lead to deteriorated performance and even instability. An extension of the aforementioned studies to include input saturation (even without prescribed system performance) is not straightforward, if possible at all. With respect to the related studies [21,22,23,24,25,26,27,28,29,30] on nonlinear multi-agent system control, the distinctive contribution of our study is the prescribed performance guarantees for the local formation error and the local velocity error of each agent. Moreover, compared to the aforementioned studies, we develop a low-complexity control design that does not employ complex approximation structures, such as neural networks and fuzzy approximators, thus rendering our approach easier to implement. Compared to the most closely related studies, which consider nonlinear multi-agent systems with input saturation and prescribed performance guarantees [42,43,44,45], the distinctive contribution of the proposed method is that it addresses a larger class of systems than all of those studies. Furthermore, there exist separate contributions compared to each of them individually. In particular, ref. [42] considers a more restrictive communication network topology than our study, namely an undirected and connected graph in contrast to our directed graph with a spanning tree. Ref. [43] imposes more restrictive requirements on the leader's trajectory, assuming that more information about the leader's position is transmitted through the communication network, i.e., its first and second derivatives, compared to only the position in our study; it also employs neural network approximators. Contrasted with [44], our study assumes relaxed communication, i.e., a directed graph with a spanning tree versus a connected undirected graph. Also, the dynamics of the systems considered in [44] are assumed to be input-to-state stable, an assumption not present in this study. Finally, ref. [45] assumes that the second derivative of the leader's trajectory is also bounded and employs neural network approximators, contrary to our study.
The remainder of this paper is structured as follows: Section 2 reviews the necessary preliminaries. Section 3 formally presents the considered problem. Section 4 details the control design and includes its stability analysis. Section 5 illustrates the effectiveness of the proposed method through simulations. Section 6 provides concluding remarks and suggests future research directions.

2. Preliminaries

Graph Theory

To model the inter-agent interactions, we consider a directed graph, defined as $\mathcal{G} := (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} := \{1, \ldots, N\}$ is the set of agents (nodes) and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of communication links (edges). We denote $(i, j) \in \mathcal{E}$ when agent $j$ receives interaction information from agent $i$. We consider that if $(i, j) \in \mathcal{E}$, then agent $j$ is aware of its relative position to agent $i$. The neighborhood of $i$ is defined as $\mathcal{N}_i := \{j \,|\, (j, i) \in \mathcal{E}\}$. The Laplacian matrix of $\mathcal{G}$ is defined as $L := D - A$, where $A := [a_{i,j}] \in \mathbb{R}^{N \times N}$ is the adjacency matrix, with $a_{i,j} > 0$ if $j \in \mathcal{N}_i$ and $a_{i,j} = 0$ otherwise, and $D := \mathrm{diag}(d_1, \ldots, d_N) \in \mathbb{R}^{N \times N}$ is the in-degree matrix, with $d_i := \sum_{j=1}^{N} a_{i,j}$. A directed path from $i$ to $j$ is defined as a sequence of successive edges $(i, k), (k, l), \ldots, (\cdot, j)$. A directed graph contains a spanning tree if there exists a root node from which there exists a directed path to every other node in $\mathcal{V}$.
In this study, we consider that there exists a leader node $L$. The communication between the agents and the leader is defined via the matrix $B := \mathrm{diag}(b_1, \ldots, b_N) \in \mathbb{R}^{N \times N}$, $b_i \geq 0$. If $b_i > 0$, then agent $i$ is aware of its relative position to the leader, weighted by $b_i$. We can now define the augmented graph that, in addition to $\mathcal{G}$, captures the interactions of the agents with the leader, as $\bar{\mathcal{G}} := (\bar{\mathcal{V}}, \bar{\mathcal{E}})$, where $\bar{\mathcal{V}} := \{L, 1, \ldots, N\}$ and $\bar{\mathcal{E}} \subseteq \bar{\mathcal{V}} \times \bar{\mathcal{V}}$.
We restate a useful lemma from [46] that will be used in the sequel:
Lemma 1. 
Consider an augmented graph $\bar{\mathcal{G}}$ that contains a spanning tree with the root being the leader $L$. Then the matrix $L + B$ is non-singular, and there exist a diagonal positive definite matrix $P \in \mathbb{R}^{N \times N}$ and a positive constant $\delta$ such that $P(L+B) + (L+B)^T P \geq \delta P$.
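For readers who wish to check these graph-theoretic quantities numerically, the following NumPy sketch builds $L$, $B$, and $L+B$ for a hypothetical five-agent directed topology (not the exact graph of Figure 2) in which only one agent receives the leader's position, and verifies the non-singularity asserted by Lemma 1.

import numpy as np

# Hypothetical 5-agent directed graph whose augmented version has a spanning
# tree rooted at the leader; an edge (i, j) means that agent j receives from agent i.
N = 5
A = np.zeros((N, N))                      # adjacency: A[j, i] > 0 iff i is a neighbor of j
for (i, j) in [(2, 1), (1, 0), (2, 3), (3, 4)]:   # illustrative edges (0-based indices)
    A[j, i] = 1.0
D = np.diag(A.sum(axis=1))                # in-degree matrix
L = D - A                                 # graph Laplacian
B = np.diag([0.0, 0.0, 1.0, 0.0, 0.0])    # only one agent measures its offset from the leader

H = L + B
print("non-singular:", np.linalg.matrix_rank(H) == N)                # Lemma 1
print("sigma_min(L+B) =", np.linalg.svd(H, compute_uv=False).min())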

3. Problem Formulation

In this section, we formally define the control problem addressed in this study. We begin by introducing the class of systems under consideration as well as the standing assumptions. We then define the desired prescribed performance tracking and the control objectives.

3.1. System Dynamics and Standing Assumptions

We consider N agents, where each agent i is an uncertain Euler–Lagrange system with dynamics described by the following equations:
$\dot{\eta}_i = v_i + g_{1,i}(t, \eta_i),$ (1)
$M_i(\eta_i)\,\dot{v}_i + C_i(\eta_i, v_i)\,v_i + g_{2,i}(t, \eta_i, v_i) = \mathrm{sat}(\tau_i),$ (2)
where $\eta_i = [\eta_{i,1} \ \cdots \ \eta_{i,m}]^T \in \mathbb{R}^m$ is the generalized position vector, $v_i = [v_{i,1} \ \cdots \ v_{i,m}]^T \in \mathbb{R}^m$ is the generalized velocity, $M_i(\eta_i) \in \mathbb{R}^{m \times m}$ is the inertia matrix, and $C_i(\eta_i, v_i) \in \mathbb{R}^{m \times m}$ represents Coriolis, centripetal, and drag effects. The vectors $g_{1,i}$ and $g_{2,i}$ model unknown but bounded disturbances, unmodeled dynamics, gravitational forces, and other nonlinear effects. The control input vector is denoted as $\mathrm{sat}(\tau_i) = [\mathrm{sat}(\tau_{i,1}) \ \cdots \ \mathrm{sat}(\tau_{i,m})]^T \in \mathbb{R}^m$, where $\tau_i = [\tau_{i,1} \ \cdots \ \tau_{i,m}]^T \in \mathbb{R}^m$. The continuous saturation function $\mathrm{sat}(\cdot): \mathbb{R} \to \mathbb{R}$ is defined as
$\mathrm{sat}(\tau_{i,j}) = \begin{cases} \tau_{i,j}, & \text{if } |\tau_{i,j}| < \tau_{i,j,\max}, \\ \tau_{i,j,\max}\,\mathrm{sgn}(\tau_{i,j}), & \text{otherwise}, \end{cases}$ (3)
where $\tau_{i,j,\max} > 0$ is the saturation level for each input component $\tau_{i,j}$, $j = 1, \ldots, m$, and $\mathrm{sgn}(\cdot)$ is the signum function. We make the following assumptions:
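The saturation map (3) is elementary to implement; a minimal NumPy sketch (component-wise clipping, with illustrative values) is given below for reference.

import numpy as np

def sat(tau, tau_max):
    """Component-wise saturation of Equation (3): identity while |tau| < tau_max,
    clipped to tau_max * sgn(tau) otherwise; tau_max may be a per-component array."""
    return np.clip(np.asarray(tau, dtype=float), -np.asarray(tau_max), np.asarray(tau_max))

# example with translational/rotational limits of the kind used later in the simulation
print(sat([4.0, -7.2, 0.3], [5.0, 5.0, 0.5]))   # -> [ 4.  -5.   0.3]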
Assumption 1. 
All system matrices and vectors are continuous functions with respect to time and are locally Lipschitz with respect to the rest of their arguments.
Assumption 2. 
The functions g j , i , j = 1 , 2 are uniformly bounded with respect to time to incorporate the effect of time-dependent bounded disturbances.
Assumption 3. 
The inertia matrix M i ( η i ) is diagonal and uniformly positive definite for all η i R m .
Remark 1. 
Assumption 3 facilitates the development of an approximation-free control scheme that does not rely on any knowledge or estimation of the system dynamics. In cases where the full inertia matrix is known or can be estimated, the control framework could, in principle, be extended to account for inertial coupling.
Assumption 4. 
The augmented directed communication graph G ¯ contains a spanning tree with the leader node as the root.
Assumption 5. 
The leader's trajectory $\eta_L(t) = [\eta_{L,1}(t) \ \cdots \ \eta_{L,m}(t)]^T: \mathbb{R}_+ \to \mathbb{R}^m$ is continuously differentiable and uniformly bounded, i.e., $|\eta_{L,j}(t)| \leq \bar{\eta}_{L,j}$, $j = 1, \ldots, m$, on $\mathbb{R}_+$, with a bounded, although unknown, derivative with respect to time, i.e., $|\dot{\eta}_{L,j}(t)| \leq \bar{\dot{\eta}}_{L,j}$, $j = 1, \ldots, m$, on $\mathbb{R}_+$.

3.2. Problem Definition

The objective of this study is to design a distributed control law for each agent that guarantees that the agents’ positions will converge to a predefined formation around the leader’s position while satisfying predefined transient and steady-state performance guarantees. As in the literature of distributed prescribed performance control, the performance requirement is for the formation error of each agent to remain bounded by a prescribed envelope function of the form
$\rho_{i,p}(t) = \big(\rho_{i,p}^{0} - \rho_{i,p}^{\infty}\big)e^{-\lambda_{i,p} t} + \rho_{i,p}^{\infty},$
where $\rho_{i,p}^{0} = \rho_{i,p}(0)$, $\rho_{i,p}^{\infty} = \lim_{t \to \infty} \rho_{i,p}(t) > 0$ and $\lambda_{i,p} > 0$, for all $i \in \{1, \ldots, N\}$.
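As an illustration, this envelope is a one-line function; a sketch follows, using, purely as an example, the parameter values that appear later in the simulation section.

import numpy as np

def envelope(t, rho0, rho_inf, lam):
    """Prescribed performance envelope (rho0 - rho_inf) * exp(-lam * t) + rho_inf,
    with rho0 > |e(0)|, steady-state bound rho_inf > 0, and decay rate lam > 0."""
    return (rho0 - rho_inf) * np.exp(-lam * t) + rho_inf

t = np.linspace(0.0, 600.0, 4)
print(envelope(t, rho0=30.0, rho_inf=0.001, lam=0.01))   # position-error envelope example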
In the presence of input saturation such as (3), solving the standard prescribed performance formation control problem would require either accepting internal instability or letting the saturation levels become arbitrarily large, which defeats the purpose of the input constraints, as stated in [38] for single-agent systems. To overcome this problem, the goal of this study is to design a distributed control mechanism that allows each agent to appropriately modify its local virtual velocity error whenever its control input becomes saturated, so that the resulting reference remains feasible under saturation. The goals of this study can be delineated as follows:
1. Design a distributed mechanism that appropriately modifies the local virtual velocity error of each agent whenever its input becomes saturated.
2. Design a distributed control protocol that ensures that the modified formation tracking error adheres to the prescribed specifications.
3. The modification and control mechanism should be continuous and of low computational effort, i.e., no approximation structures, such as neural networks or fuzzy approximators, should be used.
4. Provide conditions for the saturation level of each agent that make the prescribed performance specifications feasible.

4. Results

In this section, we present our distributed formation tracking strategy for a group of input-constrained Euler–Lagrange agents. First, we introduce the distributed virtual velocity reference modification mechanism that enables the rest of the design: it activates whenever individual actuators saturate and adjusts the desired virtual velocity reference so that it remains feasible under saturation. Building on this, we design a continuous, approximation-free distributed control law that guarantees each agent's formation error evolves within its prescribed performance bounds. We then state and prove the main stability theorem, establishing the boundedness and convergence properties of the closed-loop system. Finally, to illustrate and verify our theoretical developments, we provide high-fidelity simulation results for a group of BlueROV2 underwater vehicles performing formation tracking under input saturation.

4.1. Distributed Virtual Velocity Reference Modification Mechanism

We begin by designing a local virtual velocity reference modification scheme for each agent that remains zero while the agent's input is unsaturated and becomes active when the agent's input saturates. The state of this mechanism is used to alter the agent's desired virtual velocity reference so that it remains achievable by the controller when saturation occurs.
For each agent $i \in \mathcal{V}$, we design a local virtual velocity reference modification signal $\sigma_{v,i} := [\sigma_{v,i,1} \ \cdots \ \sigma_{v,i,m}]^T \in \mathbb{R}^m$, given as the state of the dynamics
$\dot{\sigma}_{v,i}(t) = -\beta_i\,\sigma_{v,i}(t) + \Delta\tau_i,$ (4)
where $\Delta\tau_i := \tau_i - \mathrm{sat}(\tau_i)$, $\sigma_{v,i}(0) = 0$, and $\beta_i > 0$ are design constants for all $i \in \mathcal{V}$.
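In discrete time, (4) is a simple first-order filter driven by the saturation excess; a minimal explicit-Euler sketch is given below (the step size, integration scheme, and function names are illustrative choices, not part of the paper).

def sigma_v_step(sigma_v, tau_requested, tau_applied, beta, dt):
    """One explicit-Euler step of the modification dynamics (4):
    d(sigma_v)/dt = -beta * sigma_v + (tau_requested - tau_applied).
    The signal stays at zero while the input is unsaturated (the excess is zero)
    and charges up whenever saturation clips the requested effort."""
    delta_tau = tau_requested - tau_applied
    return sigma_v + dt * (-beta * sigma_v + delta_tau)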

4.2. Control Design

The distributed control law is built upon the velocity error modification signal σ v , i introduced in (4). Specifically, each agent’s desired virtual velocity reference is augmented by σ v , i whenever its actuator saturates, thereby preserving prescribed performance in the presence of input limits. We begin by defining the local formation error for each agent, which captures both neighbor-to-neighbor and leader-to-agent deviations.
For agent $i \in \mathcal{V}$ and generalized coordinate $j = 1, \ldots, m$, let
$\hat{\eta}_{i,j} := \eta_{i,j} - c_{i,j},$ (5)
where $c_{i,j}$ is the desired offset of agent $i$ along coordinate $j$ from the leader's position in the formation. Then the local formation error for agent $i$ in the $j$-th coordinate, $e_{i,j}$, is defined by
$e_{i,j} := \sum_{\ell \in \mathcal{N}_i} a_{i,\ell}\big(\hat{\eta}_{i,j} - \hat{\eta}_{\ell,j}\big) + b_i\big(\hat{\eta}_{i,j} - \eta_{L,j}\big),$ (6)
for all $j = 1, \ldots, m$ and $i = 1, \ldots, N$. Here, $a_{i,\ell}$ are the adjacency weights and $b_i \geq 0$ the leader connection weights introduced in Section 2.
The distributed control design consists of two steps:
  • Step 1: Define the normalized formation error and virtual velocity:
We first normalize the formation errors (6) by the prescribed performance functions. For each agent $i = 1, \ldots, N$ and coordinate $j = 1, \ldots, m$, define the normalized formation error
$\xi_{p,i,j}(t) := \dfrac{e_{i,j}(t)}{\rho_{p,i,j}(t)},$ (7)
where the prescribed performance function $\rho_{p,i,j}(t)$ is any continuously differentiable, strictly decreasing function satisfying
$\rho_{p,i,j}(0) > |e_{i,j}(0)|, \qquad \lim_{t \to \infty} \rho_{p,i,j}(t) = \rho_{p,i,j}^{\infty} > 0.$ (8)
A typical choice is
$\rho_{p,i,j}(t) = \big(\rho_{p,i,j}(0) - \rho_{p,i,j}^{\infty}\big)e^{-\lambda_{p,i,j} t} + \rho_{p,i,j}^{\infty},$ (9)
with $\lambda_{p,i,j} > 0$. Next, we assign each agent a virtual velocity reference that will drive $\xi_{p,i,j}$ towards zero. Let
$v_{d,i,j}(t) := -\dfrac{k_{p,i,j}}{\rho_{p,i,j}(t)}\,\dfrac{2}{1 - \xi_{p,i,j}^{2}(t)}\,T\big(\xi_{p,i,j}(t)\big),$ (10)
where $k_{p,i,j} > 0$ is a design gain and
$T(s) = \ln\!\left(\dfrac{1+s}{1-s}\right), \qquad s \in (-1, 1).$ (11)
  • Step 2: Define the normalized velocity error and the control law:
To incorporate actuator saturation effects, we use the modification signal $\sigma_{v,i,j}$ from (4). For each agent $i = 1, \ldots, N$ and coordinate $j = 1, \ldots, m$, define the normalized velocity error
$\xi_{v,i,j}(t) := \dfrac{v_{i,j}(t) - v_{d,i,j}(t) + \sigma_{v,i,j}(t)}{\rho_{v,i,j}(t)},$ (12)
where $\rho_{v,i,j}(t)$ is another prescribed performance function, for the velocity error, that can be chosen analogously to (9) with $\rho_{v,i,j}(0) > |v_{i,j}(0) - v_{d,i,j}(0)|$ and $\lim_{t \to \infty} \rho_{v,i,j}(t) = \rho_{v,i,j}^{\infty} > 0$. Finally, for each agent $i = 1, \ldots, N$ and coordinate $j = 1, \ldots, m$, select the control law components as
$\tau_{i,j}(t) := -k_{v,i,j}\,T\big(\xi_{v,i,j}(t)\big),$ (13)
where $k_{v,i,j} > 0$ is a control gain.
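Both steps reduce to a few scalar operations per agent and coordinate. A minimal sketch of one evaluation of (7), (10), (12), and (13), followed by the saturation (3), as given above, is shown below; the function and variable names are illustrative, and the caller is assumed to supply the current error, velocity, modification signal, and envelope values such that the normalized errors stay inside $(-1,1)$.

import math

def control_update(e, v, sigma_v, rho_p, rho_v, k_p, k_v, tau_max):
    """One evaluation of the two design steps for a single agent and coordinate."""
    T = lambda s: math.log((1.0 + s) / (1.0 - s))               # transformation (11)
    xi_p = e / rho_p                                            # normalized formation error (7)
    v_d = -(k_p / rho_p) * (2.0 / (1.0 - xi_p ** 2)) * T(xi_p)  # virtual velocity reference (10)
    xi_v = (v - v_d + sigma_v) / rho_v                          # normalized velocity error (12)
    tau_req = -k_v * T(xi_v)                                    # requested effort (13)
    tau = max(-tau_max, min(tau_max, tau_req))                  # applied effort after saturation (3)
    return v_d, tau_req, tau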
The closed-loop signal flow of the proposed distributed control scheme is summarized in Figure 1, illustrating the signal flow and the interconnection of each component.

4.3. Stability Analysis

We are now ready to state our main stability result.
Theorem 1. 
Consider a group of N agents with dynamics given by (1) and (2) under Assumptions 1–5. The distributed control law (13) together with the modification mechanism (4) ensures that if
$\tau_{i,j,\max} \geq \max\left\{\,|\tau_{i,j}(0)| - \Lambda,\ \ \sup_{t \geq 0}\ \max_{(\eta_i, v_i) \in S}\ m_{i,j}(\eta_i)\Big(|f_{i,j}(t, \eta_i, v_i)| + \dot{\tilde{v}}_{d,i,j} + |\dot{\rho}_{v,i,j}|\Big)\right\},$ (14)
where $\Lambda \geq 0$, $f_i(t, \eta_i, v_i) := -M_i^{-1}C_i(\eta_i, v_i)v_i - M_i^{-1}g_{2,i}(t, \eta_i, v_i)$, and $S$ is defined in the proof, then all closed-loop signals remain bounded and prescribed performance formation tracking is achieved, in the sense that $|e_{i,j}(t)| < \rho_{p,i,j}(t)$ for all $t \geq 0$, $i = 1, \ldots, N$ and $j = 1, \ldots, m$, and
$|\hat{\eta}_{i,j} - \eta_{L,j}| \leq \dfrac{\|R_{p,j}(t)\|}{\sigma_{\min}(L+B)} \leq \sqrt{(N^2+N+1)\dfrac{N-1}{N}}\left(\dfrac{N}{N-1}\right)^{2}\|R_{p,j}(t)\|,$ (15)
where $\mathbf{1} = [1 \ \cdots \ 1]^T$ and $R_{p,j}(t) = \mathrm{diag}\big(\rho_{p,1,j}(t), \ldots, \rho_{p,N,j}(t)\big)$ for all $j = 1, \ldots, m$.
Proof. 
First, since each $M_i$ is diagonal and uniformly positive definite, we can rewrite the dynamics (2) as
$\dot{\eta}_i = v_i + g_{1,i}(t, \eta_i),$ (16)
$\dot{v}_i = -M_i^{-1}(\eta_i)C_i(\eta_i, v_i)v_i - M_i^{-1}(\eta_i)g_{2,i}(t, \eta_i, v_i) + M_i^{-1}(\eta_i)\,\mathrm{sat}(\tau_i).$ (17)
Now, we can rewrite (17) as
$\dot{v}_i = f_i(t, \eta_i, v_i) + M_i^{-1}(\eta_i)\,\mathrm{sat}(\tau_i).$ (18)
Next, let us recall the definitions of the normalized errors and virtual reference
$\xi_{p,i,j} = \dfrac{e_{i,j}}{\rho_{p,i,j}}, \qquad v_{d,i,j} = -\dfrac{k_{p,i,j}}{\rho_{p,i,j}}\,\dfrac{2}{1 - \xi_{p,i,j}^2}\,T(\xi_{p,i,j}),$ (19)
and set
$\tilde{v}_{i,j} := v_{i,j} - v_{d,i,j} + \sigma_{v,i,j}.$ (20)
Then, from (12),
$\xi_{v,i,j} = \dfrac{\tilde{v}_{i,j}}{\rho_{v,i,j}}.$ (21)
Let us now differentiate $\xi_{p,i,j}$ with respect to time. First, from the definition of the formation error $e_{i,j}$, one obtains
$\dot{e}_{i,j} = \sum_{\ell \in \mathcal{N}_i} a_{i,\ell}\big(v_{i,j} + g_{1,i,j}(t, \eta_i) - v_{\ell,j} - g_{1,\ell,j}(t, \eta_\ell)\big) + b_i\big(v_{i,j} + g_{1,i,j}(t, \eta_i) - \dot{\eta}_{L,j}\big).$ (22)
Hence, using $\dot{\xi}_{p,i,j} = \big(\dot{e}_{i,j} - \dot{\rho}_{p,i,j}\,\xi_{p,i,j}\big)/\rho_{p,i,j}$, we have
$\dot{\xi}_{p,i,j}(t) = \dfrac{1}{\rho_{p,i,j}}\Big\{\sum_{\ell \in \mathcal{N}_i} a_{i,\ell}\big(v_{i,j} + g_{1,i,j}(t, \eta_i) - v_{\ell,j} - g_{1,\ell,j}(t, \eta_\ell)\big) + b_i\big(v_{i,j} + g_{1,i,j}(t, \eta_i) - \dot{\eta}_{L,j}\big) - \dot{\rho}_{p,i,j}\,\xi_{p,i,j}\Big\}.$ (23)
Differentiating $\xi_{v,i,j}$ and substituting (18) and the derivative of $\sigma_{v,i,j}$ from (4) yields
$\dot{\xi}_{v,i,j} = \dfrac{1}{\rho_{v,i,j}}\Big[f_{i,j}(t, \eta_i, v_i) + \dfrac{1}{m_{i,j}(\eta_i)}\mathrm{sat}(\tau_{i,j}) - \dot{v}_{d,i,j} - \beta_i\,\sigma_{v,i,j} + \Delta\tau_{i,j} - \dot{\rho}_{v,i,j}\,\xi_{v,i,j}\Big].$ (24)
Consider now the augmented state vector
$\xi := \big[\xi_{p,1}^T \ \cdots \ \xi_{p,N}^T \ \ \xi_{v,1}^T \ \cdots \ \xi_{v,N}^T \ \ \sigma_{v,1}^T \ \cdots \ \sigma_{v,N}^T\big]^T \in \mathbb{R}^{3mN},$ (25)
where for each $i = 1, \ldots, N$,
$\xi_{p,i} := [\xi_{p,i,1} \ \cdots \ \xi_{p,i,m}]^T, \qquad \xi_{v,i} := [\xi_{v,i,1} \ \cdots \ \xi_{v,i,m}]^T.$ (26)
From (4), (23), and (24), the overall closed-loop dynamics can be written as
$\dot{\xi} = H(t, \xi).$ (27)
Let
$\Omega_\xi := (-1, 1)^{2mN} \times \big(-\tilde{\sigma}_{v,i,j},\ \tilde{\sigma}_{v,i,j}\big)^{mN}$ (28)
for some constants $\tilde{\sigma}_{v,i,j} > 0$ to be defined later in the proof. $\Omega_\xi$ is nonempty and open. By construction of the performance functions and the reference velocity error modification mechanism, the initial condition of the augmented state satisfies
$\xi(0) \in \Omega_\xi.$ (29)
Moreover, $H(t, \xi)$ is continuous in $t$ and locally Lipschitz in $\xi$ on $\Omega_\xi$. Therefore, according to the existence and uniqueness theorem (see Theorem 3.1, p. 88 of [47]), there exists a unique maximal solution
$\xi: [0, t_{\max}) \to \Omega_\xi, \qquad \text{with } \xi(t) \in \Omega_\xi,\ \forall t \in [0, t_{\max}).$ (30)
This implies the existence of a constant $\Lambda \geq 0$ such that
$|\tau_{i,j}(0)| \leq \Lambda + \tau_{i,j,\max},$ (31)
for all $t \in [0, t_{\max})$, $i = 1, \ldots, N$ and $j = 1, \ldots, m$. Hence,
$\big|T\big(\xi_{v,i,j}(0)\big)\big| \leq \dfrac{\Lambda + \tau_{i,j,\max}}{k_{v,i,j}}$ (32)
and
$|\xi_{v,i,j}(0)| \leq T^{-1}\!\left(\dfrac{\Lambda + \tau_{i,j,\max}}{k_{v,i,j}}\right) =: \tilde{\xi}_{v,i,j},$ (33)
for all $t \in [0, t_{\max})$, $i = 1, \ldots, N$ and $j = 1, \ldots, m$. Consequently, for $t \in [0, t_{\max})$, it holds true that
$|\Delta\tau_{i,j}(t)| \leq \Lambda,$ (34)
which implies from (4) that
$|\sigma_{v,i,j}(t)| \leq \dfrac{\Lambda}{\beta_i} =: \tilde{\sigma}_{v,i,j}.$ (35)
Let us now define the column vector
$\epsilon_{p,j} := \big[T(\xi_{p,1,j}) \ \ T(\xi_{p,2,j}) \ \cdots \ T(\xi_{p,N,j})\big]^T,$ (36)
and consider the function
$V_{p,j} := \tfrac{1}{2}\,\epsilon_{p,j}^T P\,\epsilon_{p,j},$ (37)
where $P$ is the diagonal positive-definite matrix from Lemma 1. Its time derivative is
$\dot{V}_{p,j} = \epsilon_{p,j}^T P\,J(\xi_{p,j})\big[\dot{\xi}_{p,1,j} \ \ \dot{\xi}_{p,2,j} \ \cdots \ \dot{\xi}_{p,N,j}\big]^T,$ (38)
where
$J(\xi_{p,j}) := \mathrm{diag}\big(J(\xi_{p,1,j}), \ldots, J(\xi_{p,N,j})\big), \qquad J(\xi_{p,i,j}) := \dfrac{2}{1 - \xi_{p,i,j}^2}.$ (39)
Substituting $\dot{\xi}_{p,i,j}$ from (23) and
$v_{i,j} = -\dfrac{k_{p,i,j}}{\rho_{p,i,j}}\,J(\xi_{p,i,j})\,T(\xi_{p,i,j}) - \sigma_{v,i,j} + \xi_{v,i,j}\,\rho_{v,i,j},$ (40)
and after some algebraic manipulations, we obtain
$\dot{V}_{p,j} = \epsilon_{p,j}^T J(\xi_{p,j}) R_{p,j}^{-1}(t) P \Big[(L+B)\,\mathrm{col}_{i=1}^{N}\big(\xi_{v,i,j}\rho_{v,i,j} - \sigma_{v,i,j} - \dot{\eta}_{L,j} + g_{1,i,j}(t, \eta_i)\big) - K_{p,j}(L+B) R_{p,j}^{-1}(t) J(\xi_{p,j})\,\epsilon_{p,j} - \mathrm{col}_{i=1}^{N}\big(\dot{\rho}_{p,i,j}\,\xi_{p,i,j}\big)\Big],$ (41)
where $\mathrm{col}_{i=1}^{N}(\cdot)$ is the column vector containing the indicated scalars, $K_{p,j} := \mathrm{diag}(k_{p,1,j}, \ldots, k_{p,N,j})$ and $R_{p,j}(t) := \mathrm{diag}(\rho_{p,1,j}, \ldots, \rho_{p,N,j})$. Let us define
$w_{p,j} := (L+B)\,\mathrm{col}_{i=1}^{N}\big(\xi_{v,i,j}\rho_{v,i,j} - \sigma_{v,i,j} - \dot{\eta}_{L,j} + g_{1,i,j}(t, \eta_i)\big) - \mathrm{col}_{i=1}^{N}\big(\dot{\rho}_{p,i,j}\,\xi_{p,i,j}\big).$ (42)
Substituting this in (41), we can obtain
$\dot{V}_{p,j} \leq -\tfrac{1}{2}\,\epsilon_{p,j}^T J(\xi_{p,j}) R_{p,j}^{-1}(t)\big[P K_{p,j}(L+B) + (L+B)^T P K_{p,j}\big] R_{p,j}^{-1}(t) J(\xi_{p,j})\,\epsilon_{p,j} + \epsilon_{p,j}^T J(\xi_{p,j}) R_{p,j}^{-1}(t) P\,w_{p,j}.$ (43)
Completing the squares and invoking Lemma 1, we have
$\dot{V}_{p,j} \leq -\tfrac{\delta}{4}\,\epsilon_{p,j}^T J(\xi_{p,j}) R_{p,j}^{-1}(t) P K_{p,j} R_{p,j}^{-1}(t) J(\xi_{p,j})\,\epsilon_{p,j} + \tfrac{1}{\delta}\,w_{p,j}^T K_{p,j}^{-1} P\,w_{p,j}.$ (44)
Since $P$ and $K_{p,j}$ are diagonal positive-definite, we can write $P K_{p,j} = K_{p,j}^{1/2} P K_{p,j}^{1/2}$, where $K_{p,j}^{1/2} = \mathrm{diag}\big(\sqrt{k_{p,1,j}}, \ldots, \sqrt{k_{p,N,j}}\big)$, and therefore
$\epsilon_{p,j}^T J(\xi_{p,j}) R_{p,j}^{-1}(t) P K_{p,j} R_{p,j}^{-1}(t) J(\xi_{p,j})\,\epsilon_{p,j} \geq \lambda_{\min}\Big(R_{p,j}^{-1}(t) J(\xi_{p,j}) K_{p,j} J(\xi_{p,j}) R_{p,j}^{-1}(t)\Big)\,\epsilon_{p,j}^T P\,\epsilon_{p,j},$ (45)
where $\lambda_{\min}$ denotes the minimum eigenvalue. It follows that
$\dot{V}_{p,j} \leq -\tfrac{\delta}{4}\,\lambda_{\min}\Big(R_{p,j}^{-1}(t) J(\xi_{p,j}) K_{p,j} J(\xi_{p,j}) R_{p,j}^{-1}(t)\Big) V_{p,j} + \tfrac{1}{\delta}\,w_{p,j}^T P\,w_{p,j}.$ (46)
Therefore, we have that $\dot{V}_{p,j} \leq 0$ whenever
$V_{p,j} \geq \dfrac{4}{\delta^{2}\,\lambda_{\min}\Big(R_{p,j}^{-1}(t) J(\xi_{p,j}) K_{p,j} J(\xi_{p,j}) R_{p,j}^{-1}(t)\Big)}\,w_{p,j}^T P K_{p,j}^{-1}\,w_{p,j}.$ (47)
Therefore, there exist constants $\bar{\epsilon}_{p,j} > 0$ such that $\|\epsilon_{p,j}\| \leq \bar{\epsilon}_{p,j}$ for all $t \in [0, t_{\max})$. From the definition of $\epsilon_{p,j}$ in (36), this implies that $T(\xi_{p,i,j})$ remains bounded and, therefore, from (19), the normalized position errors remain strictly within $(-1, 1)$, i.e., $|\xi_{p,i,j}(t)| < 1$, and the virtual velocity reference and its time derivative remain bounded by some constants $\tilde{v}_{d,i,j} > 0$ and $\dot{\tilde{v}}_{d,i,j} > 0$, respectively, for all $t \in [0, t_{\max})$. Since the preceding analysis is independent of $i$ and $j$, the corresponding results are valid for all $i = 1, \ldots, N$ and $j = 1, \ldots, m$. Equations (6) and (7), the definition of the prescribed performance functions $\rho_{p,i,j}$, and Assumptions 4 and 5, in turn, imply that there exist positive constants $\tilde{\eta}_{i,j}$ such that $|\eta_{i,j}| < \tilde{\eta}_{i,j}$ for all $t \in [0, t_{\max})$, $i = 1, \ldots, N$ and $j = 1, \ldots, m$.
Next, we distinguish two cases as follows. Case 1: If $|\tau_{i,j}| < \tau_{i,j,\max}$, then $\big|T\big(\xi_{v,i,j}(t)\big)\big| < \dfrac{\tau_{i,j,\max}}{k_{v,i,j}} \leq \dfrac{\Lambda + \tau_{i,j,\max}}{k_{v,i,j}}$ and therefore $|\xi_{v,i,j}(t)| \leq \tilde{\xi}_{v,i,j}$. Case 2: If $|\tau_{i,j}| \geq \tau_{i,j,\max}$, let us consider the function
$V_{v,i,j} = \tfrac{1}{2}\,T\big(\xi_{v,i,j}\big)^2.$ (48)
Its time derivative is
$\dot{V}_{v,i,j} = \dfrac{2\,T(\xi_{v,i,j})}{\big(1 - \xi_{v,i,j}^2\big)\rho_{v,i,j}(t)}\Big[f_{i,j}(t, \eta_i, v_i) + \dfrac{1}{m_{i,j}(\eta_i)}\mathrm{sat}(\tau_{i,j}) - \dot{v}_{d,i,j} - \beta_i\,\sigma_{v,i,j} + \Delta\tau_{i,j} - \dot{\rho}_{v,i,j}\,\xi_{v,i,j}\Big].$ (49)
It is valid that $\mathrm{sat}(\tau_{i,j}) = -\mathrm{sgn}\big(T(\xi_{v,i,j})\big)\tau_{i,j,\max}$ and also that, in the limit case where $T(\xi_{v,i,j})$ reaches its initial bound, i.e., $\big|T(\xi_{v,i,j})\big| = \dfrac{\Lambda + \tau_{i,j,\max}}{k_{v,i,j}}$, it holds that $\Delta\tau_{i,j} = -\mathrm{sgn}\big(T(\xi_{v,i,j})\big)\Lambda$. Therefore, utilizing these, invoking (35) and $|\xi_{v,i,j}(t)| < 1$ from the existence and uniqueness theorem for all $t \in [0, t_{\max})$, we can obtain
$\dot{V}_{v,i,j} \leq \dfrac{2\,\big|T(\xi_{v,i,j})\big|}{\big(1 - \xi_{v,i,j}^2\big)\rho_{v,i,j}(t)}\Big(|f_{i,j}(t, \eta_i, v_i)| - \dfrac{1}{m_{i,j}(\eta_i)}\tau_{i,j,\max} + \dot{\tilde{v}}_{d,i,j} + |\dot{\rho}_{v,i,j}|\Big),$ (50)
where $\dot{\tilde{v}}_{d,i,j} > 0$ denotes the bound of the derivative of the virtual velocity reference. As a result, if
$\tau_{i,j,\max} \geq \max\left\{\,|\tau_{i,j}(0)| - \Lambda,\ \ \sup_{t \geq 0}\ \max_{(\eta_i, v_i) \in S}\ m_{i,j}(\eta_i)\Big(|f_{i,j}(t, \eta_i, v_i)| + \dot{\tilde{v}}_{d,i,j} + |\dot{\rho}_{v,i,j}|\Big)\right\},$ (51)
where $S = \big\{(\eta_i, v_i) \in \mathbb{R}^{2m} : |\eta_{i,j}| \leq \tilde{\eta}_{i,j},\ |v_{i,j}| \leq \rho_{v,i,j}(0) + \tfrac{\Lambda}{\beta_i} + \tilde{v}_{d,i,j},\ j = 1, \ldots, m\big\}$, then $\dot{V}_{v,i,j} \leq 0$ and, consequently, $T(\xi_{v,i,j}(t))$ will remain within its initial bound, $\big|T(\xi_{v,i,j}(t))\big| \leq \dfrac{\Lambda + \tau_{i,j,\max}}{k_{v,i,j}}$. Thus, we also have $|\xi_{v,i,j}(t)| \leq T^{-1}\!\left(\dfrac{\Lambda + \tau_{i,j,\max}}{k_{v,i,j}}\right) = \tilde{\xi}_{v,i,j} < 1$ for all $t \in [0, t_{\max})$. By virtue of (33), it holds that $\xi_{v,i,j}(0) \in [-\tilde{\xi}_{v,i,j}, \tilde{\xi}_{v,i,j}] \subset (-1, 1)$. Since $\tau_{i,j}$ is bounded, we can deduce that $|\Delta\tau_{i,j}(t)| = |\tau_{i,j}(t) - \mathrm{sat}(\tau_{i,j}(t))| \leq \sup_{t \geq 0}|\Delta\tau_{i,j}(t)|$ and, as a result, $|\sigma_{v,i,j}(t)| \leq \dfrac{\sup_{t \geq 0}|\Delta\tau_{i,j}(t)|}{\beta_i}\big(1 - e^{-\beta_i t}\big)$ for all $t \in [0, t_{\max})$. Invoking Proposition 1 in [48], if $t_{\max} < \infty$, then there would exist $t \in [0, t_{\max})$ such that $\xi_{v,i,j}(t) \notin [-\tilde{\xi}_{v,i,j}, \tilde{\xi}_{v,i,j}]$, which is a contradiction. As a result, $t_{\max} = \infty$. The preceding analysis is independent of $i$ and $j$ and, therefore, the corresponding results are valid for all $i = 1, \ldots, N$ and $j = 1, \ldots, m$. This implies that all closed-loop signals remain bounded for all $t \geq 0$.
From (7), we obtain $|e_{i,j}(t)| < \rho_{p,i,j}(t)$ for all $t \geq 0$, $i = 1, \ldots, N$ and $j = 1, \ldots, m$. Invoking the invertibility of $(L+B)$ from Assumption 4, we conclude that prescribed performance formation is achieved in the sense that
$\big\|(L+B)\big(\hat{\eta}_j - \eta_{L,j}\mathbf{1}\big)\big\| \leq \big\|R_{p,j}(t)\big\|,$ (52)
where $\hat{\eta}_j = [\hat{\eta}_{1,j} \ \cdots \ \hat{\eta}_{N,j}]^T$ and, therefore, invoking Remark 2 in [32], we obtain
$|\hat{\eta}_{i,j} - \eta_{L,j}| \leq \big\|\hat{\eta}_j - \eta_{L,j}\mathbf{1}\big\| \leq \dfrac{\|R_{p,j}(t)\|}{\sigma_{\min}(L+B)} \leq \sqrt{(N^2+N+1)\dfrac{N-1}{N}}\left(\dfrac{N}{N-1}\right)^{2}\|R_{p,j}(t)\|,$ (53)
for all $t \geq 0$, $i = 1, \ldots, N$ and $j = 1, \ldots, m$, where $\sigma_{\min}(L+B)$ denotes the smallest singular value of $(L+B)$. □
Remark 2. 
The parameter Λ in Theorem 1 is an upper bound on the difference between the requested control effort and the actual control effort that might arise in the closed-loop system.
Remark 3. 
The proposed adaptive PPC algorithm operates without any knowledge of the plant nonlinearities and without using any universal approximator such as neural networks or fuzzy systems. Both the control input and the virtual velocity error modification are obtained in a straightforward manner, keeping computational overhead low. In addition, the method does not require the time derivative of the leader’s signal η L ( t ) , which makes it suitable for applications where the desired trajectory is measured online and its closed form is not available.

5. Simulation

This section provides simulation results to verify the effectiveness of the proposed control scheme.

5.1. Simulation Scenario

To demonstrate the effectiveness and robustness of the proposed distributed control framework, we present simulation results for a group of five BlueROV2 autonomous underwater vehicles (AUVs) operating in a six-DOF leader–follower formation, where the leader follows a sinusoidal trajectory. Each agent is modeled by full rigid-body dynamics with hydrodynamic effects using state vectors
$\eta = [x,\ y,\ z,\ \phi,\ \theta,\ \psi]^T, \qquad v = [u,\ v,\ w,\ p,\ q,\ r]^T$
and control inputs
$\tau = [X,\ Y,\ Z,\ K,\ M,\ N]^T.$
The inertia matrix is defined as
$M = \mathrm{diag}(17,\ 24.2,\ 26.07,\ 0.28,\ 0.28,\ 0.28),$
and the Coriolis, centripetal, and hydrodynamic drag forces matrix is modeled as
$C = \begin{bmatrix} 4.03 + 18.18|u| & 0 & 0 & 0 & 3.07w & -1.2v \\ 0 & 6.22 + 21.66|v| & 0 & -3.07w & 0 & 6u \\ 0 & 0 & 5.18 + 36.99|w| & 1.2v & -6u & 0 \\ 0 & 3.07w & -1.2v & 0.07 + 1.55|p| & 0.28r & -0.04q \\ -3.07w & 0 & 6u & -0.28r & 0.07 + 1.55|q| & 0.04p \\ 1.2v & -6u & 0 & 0.04q & -0.04p & 0.07 + 1.55|r| \end{bmatrix}.$
The gravitational and buoyancy effects are encoded using the restoring force vector
$g_2 = \big[\,2\sin(\theta),\ 2\cos(\theta)\sin(\phi),\ 2\cos(\theta)\cos(\phi),\ 0,\ 0,\ 0\,\big]^T.$
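For reference, a compact way to evaluate this vehicle model in code is sketched below (NumPy). The numerical coefficients are taken from the matrices above, while the signs of the off-diagonal Coriolis couplings follow the standard skew-symmetric added-mass structure and should be treated as an assumption of this sketch, not a statement of the authors' exact implementation.

import numpy as np

def bluerov2_matrices(eta, nu):
    """Evaluate M, C(nu), and g2(eta) for the six-DOF model above.
    eta = [x, y, z, phi, theta, psi], nu = [u, v, w, p, q, r]."""
    u, v, w, p, q, r = nu
    phi, theta = eta[3], eta[4]

    M = np.diag([17.0, 24.2, 26.07, 0.28, 0.28, 0.28])

    def neg_skew(a1, a2, a3):
        # -S(a): the skew-symmetric coupling block used twice below
        return np.array([[0.0,  a3, -a2],
                         [-a3, 0.0,  a1],
                         [ a2, -a1, 0.0]])

    C = np.zeros((6, 6))
    C[:3, 3:] = neg_skew(6.0 * u, 1.2 * v, 3.07 * w)
    C[3:, :3] = neg_skew(6.0 * u, 1.2 * v, 3.07 * w)
    C[3:, 3:] = neg_skew(0.04 * p, 0.04 * q, 0.28 * r)
    C += np.diag([4.03 + 18.18 * abs(u), 6.22 + 21.66 * abs(v), 5.18 + 36.99 * abs(w),
                  0.07 + 1.55 * abs(p), 0.07 + 1.55 * abs(q), 0.07 + 1.55 * abs(r)])

    g2 = np.array([2 * np.sin(theta), 2 * np.cos(theta) * np.sin(phi),
                   2 * np.cos(theta) * np.cos(phi), 0.0, 0.0, 0.0])
    return M, C, g2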
The desired position offsets of the five AUVs relative to the leader's trajectory along each coordinate x, y, z, ψ, and ϕ are given by the following matrix
$F = \begin{bmatrix} 3 & 3 & 0 & 0 & 0 \\ 2 & 2 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 2.5 & 2.5 & 0 & 0 & 0 \end{bmatrix},$
where $c_{i,j} := F_{ij}$, as defined in Equation (5), for $j$ corresponding to x, y, z, ψ, and ϕ. We note that since the BlueROV2 AUVs are not actuated in the pitch dimension (θ), we do not present a control design and results for this dimension.
  • The prescribed performance functions were selected for all agents $i = 1, \ldots, 5$ as $\rho_{p,i,j}(t) = \big(\rho_{p,i,j}^{0} - \rho_{p,i,j}^{\infty}\big)e^{-\lambda_{p,i,j}t} + \rho_{p,i,j}^{\infty}$, $j = x, y, z, \phi, \psi$, and $\rho_{v,i,j}(t) = \big(\rho_{v,i,j}^{0} - \rho_{v,i,j}^{\infty}\big)e^{-\lambda_{v,i,j}t} + \rho_{v,i,j}^{\infty}$, $j = u, v, w, p, r$, with performance parameters $\rho_{p,i,j}^{0} = 30$, $\rho_{p,i,j}^{\infty} = 0.001$, $\lambda_{p,i,j} = 0.01$ for $j = x, y, z, \phi, \psi$, and $\rho_{v,i,j}^{0} = 30$, $\rho_{v,i,j}^{\infty} = 0.5$, $\lambda_{v,i,j} = 0.05$ for $j = u, v, w, p, r$.
  • Control gains were chosen as $k_{p,i,j} = 50$, $j = x, y, z, \phi, \psi$, and $k_{v,i,j} = 1$, $j = u, v, w, p, r$.
  • The virtual velocity reference modification gains were selected as $\beta_i = 0.5$ for all $i = 1, \ldots, 5$.
  • The control inputs were constrained by the saturation limits $\tau_{x,\max} = 5$, $\tau_{y,\max} = 5$, $\tau_{z,\max} = 5$, $\tau_{\phi,\max} = 0.5$, $\tau_{\psi,\max} = 0.5$.
The desired leader trajectory was designed as
$\eta_{L,x}(t) = 10\sin(0.01t), \quad \eta_{L,y}(t) = 10\sin(0.02t), \quad \eta_{L,z}(t) = 5\sin(0.03t), \quad \eta_{L,\phi}(t) = \tfrac{\pi}{6}\sin(0.004t), \quad \eta_{L,\psi}(t) = \tfrac{\pi}{2}\sin(0.004t).$
The multi-AUV system is configured in a communication topology that contains a spanning tree with the leader as the root. The communication topology of the group of AUVs is depicted in Figure 2. We can observe that only AUV 3 is aware of its relative position to the leader.
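To make the implementation of the proposed scheme concrete, the following sketch assembles one control update for a single agent and a single generalized coordinate using the parameter values listed above. The neighbor and leader measurements, the function names, and the explicit-Euler step for (4) are illustrative assumptions; the full six-DOF hydrodynamic plant is assumed to be integrated separately by the simulator.

import math

def agent_coordinate_update(eta_hat_i, eta_hat_neighbors, a_weights, b_i, eta_L,
                            v_i, sigma_v, t, dt,
                            rho_p0=30.0, rho_p_inf=0.001, lam_p=0.01,
                            rho_v0=30.0, rho_v_inf=0.5, lam_v=0.05,
                            k_p=50.0, k_v=1.0, beta=0.5, tau_max=5.0):
    """One evaluation of the distributed law (6)-(13) plus the modification step (4)
    for agent i in one coordinate, with the simulation parameters as defaults."""
    T = lambda s: math.log((1.0 + s) / (1.0 - s))
    rho_p = (rho_p0 - rho_p_inf) * math.exp(-lam_p * t) + rho_p_inf
    rho_v = (rho_v0 - rho_v_inf) * math.exp(-lam_v * t) + rho_v_inf

    # local formation error (6): only relative positions to neighbors and, if b_i > 0, the leader
    e = sum(a * (eta_hat_i - eta_hat_l) for a, eta_hat_l in zip(a_weights, eta_hat_neighbors))
    e += b_i * (eta_hat_i - eta_L)

    xi_p = e / rho_p                                               # (7)
    v_d = -(k_p / rho_p) * (2.0 / (1.0 - xi_p ** 2)) * T(xi_p)     # (10)
    xi_v = (v_i - v_d + sigma_v) / rho_v                           # (12)
    tau_req = -k_v * T(xi_v)                                       # (13)
    tau = max(-tau_max, min(tau_max, tau_req))                     # saturation (3)
    sigma_v_next = sigma_v + dt * (-beta * sigma_v + (tau_req - tau))   # Euler step of (4)
    return tau, sigma_v_next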

5.2. Simulation Results

As shown in Figure 3, the simulation begins with all five BlueROV2 vehicles initialized at
$\eta_1(0) = \big[\,7,\ 7,\ 10,\ \tfrac{\pi}{4},\ \tfrac{\pi}{4}\,\big]^T$, $\eta_2(0) = \big[\,5,\ 5,\ 8,\ \tfrac{\pi}{5},\ \tfrac{\pi}{5}\,\big]^T$, $\eta_3(0) = \big[\,3,\ 3,\ 6,\ \tfrac{\pi}{6},\ \tfrac{\pi}{6}\,\big]^T$, $\eta_4(0) = \big[\,3,\ 3,\ 4,\ \tfrac{\pi}{6},\ \tfrac{\pi}{6}\,\big]^T$, $\eta_5(0) = \big[\,5,\ 5,\ 2,\ \tfrac{\pi}{5},\ \tfrac{\pi}{5}\,\big]^T$,
where each state vector $\eta_i(0) = [x_i(0),\ y_i(0),\ z_i(0),\ \phi_i(0),\ \psi_i(0)]^T$ corresponds to the initial position and orientation of AUV $i$. We can observe that all follower agents' positions converge to the predefined positions around the leader's trajectory, demonstrating formation tracking.
The simulation results are summarized in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. Figure 4 illustrates the trajectories of the leader and all five followers in the x, y, z, ϕ , and ψ coordinates. The leader follows the predefined sinusoidal trajectory, while the followers converge smoothly to their respective formation offsets, confirming successful prescribed performance formation tracking.
Figure 5 shows the evolution of the formation errors for all five followers in the x, y, z, ϕ, and ψ directions. The red dashed curves represent the scaled prescribed performance formation bounds, depicted as $\pm\kappa_N\rho_p$, with $\kappa_N := \dfrac{N}{\sigma_{\min}(L+B)} = 5.9934$, as dictated by the results of Theorem 1. As observed, all formation errors remain strictly within the bounds for the entire simulation horizon, demonstrating the guaranteed transient and steady-state performance.
Figure 6 presents the velocity errors for the surge u, sway v, heave w, roll p, and yaw r components. Again, the red dashed curves correspond to the prescribed performance bounds for the modified velocities, depicted as ± ρ v . The errors remain within these envelopes, showing that the control law achieves stable and bounded modified velocity tracking.
The control inputs are shown in Figure 7. The requested efforts are contrasted against the actual applied efforts, which remain within the saturation limits. The ability of the proposed controller to handle hard input constraints without the loss of formation tracking is confirmed.
Finally, Figure 8 shows the virtual velocity modification signals σ v , i , j for all followers i = 1 , , 5 in the x, y, z, ϕ , and ψ dimensions. These signals are activated whenever requested inputs exceed the saturation limits, smoothly modifying the virtual velocity reference and thereby preserving stability and performance. Together, these results confirm that the proposed distributed control scheme achieves robust prescribed performance formation tracking under realistic six-DOF underwater dynamics, limited communication, and input saturation.

6. Conclusions

This paper presented a novel distributed prescribed performance control framework for the formation tracking of uncertain Euler–Lagrange multi-agent systems under input saturation. The approach introduces a distributed virtual velocity reference modification mechanism that dynamically adjusts each agent’s reference in the presence of actuator saturation, ensuring internal stability and preserving prescribed performance bounds. A rigorous analysis established the boundedness of all closed-loop signals and derived explicit feasibility conditions on control limits. The method is continuous, approximation-free, and requires only local neighbor and leader information, making it computationally efficient and suitable for real-time implementation in resource-constrained robotic systems.
High-fidelity six-DOF simulations with multiple BlueROV2 underwater vehicles demonstrated that the proposed controller achieves precise formation tracking while respecting input constraints, even with partial leader visibility and nonlinear hydrodynamic effects.
Future studies will focus on experimental validation in real underwater environments and an extension to time-varying communication topologies.

Author Contributions

Conceptualization, A.K.G.; methodology, A.K.G.; software, A.K.G. and A.C.; validation, A.K.G. and A.C.; formal analysis, A.K.G.; investigation, A.K.G.; writing—original draft preparation, A.K.G. and A.C.; writing—review and editing, A.K.G. and A.C.; visualization, A.K.G. and A.C.; supervision, C.P.B. and G.C.K.; project administration, C.P.B. and G.C.K.; funding acquisition, C.P.B. and G.C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the project “Applied Research for Autonomous Robotic Systems” (MIS 5200632), which is implemented within the framework of the National Recovery and Resilience Plan “Greece 2.0” (Measure: 16618-Basic and Applied Research) and is funded by the European Union—NextGenerationEU.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420. [Google Scholar] [CrossRef]
  2. Ren, W.; Beard, R.W. Distributed Consensus in Multi-Vehicle Cooperative Control: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  3. Jadbabaie, A.; Lin, J.; Morse, A.S. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Control 2003, 48, 988–1001. [Google Scholar] [CrossRef]
  4. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar] [CrossRef]
  5. Lewis, F.L.; Zhang, H.; Hengster-Movric, K.; Das, A. Cooperative Control of Multi-Agent Systems: Optimal and Adaptive Design Approaches; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  6. Feng, Z.; Sun, C.; Hu, G. Robust connectivity preserving rendezvous of multirobot systems under unknown dynamics and disturbances. IEEE Trans. Control Netw. Syst. 2016, 4, 725–735. [Google Scholar] [CrossRef]
  7. Feng, Z.; Hu, G.; Ren, W.; Dixon, W.E.; Mei, J. Distributed coordination of multiple unknown Euler-Lagrange systems. IEEE Trans. Control Netw. Syst. 2016, 5, 55–66. [Google Scholar] [CrossRef]
  8. Jin, X.; Wang, S.; Qin, J.; Zheng, W.X.; Kang, Y. Adaptive fault-tolerant consensus for a class of uncertain nonlinear second-order multi-agent systems with circuit implementation. IEEE Trans. Circuits Syst. I Regul. Pap. 2017, 65, 2243–2255. [Google Scholar] [CrossRef]
  9. Lu, M.; Liu, L. Leader–following consensus of multiple uncertain Euler–Lagrange systems with unknown dynamic leader. IEEE Trans. Autom. Control 2019, 64, 4167–4173. [Google Scholar] [CrossRef]
  10. Baldi, S.; Frasca, P. Adaptive synchronization of unknown heterogeneous agents: An adaptive virtual model reference approach. J. Frankl. Inst. 2019, 356, 935–955. [Google Scholar] [CrossRef]
  11. Jin, X.; Lü, S.; Yu, J. Adaptive NN-based consensus for a class of nonlinear multiagent systems with actuator faults and faulty networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3474–3486. [Google Scholar] [CrossRef]
  12. Deng, C.; Jin, X.Z.; Che, W.W.; Wang, H. Learning-based distributed resilient fault-tolerant control method for heterogeneous MASs under unknown leader dynamic. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 5504–5513. [Google Scholar]
  13. Astrom, K.J.; Rundqwist, L. Integrator windup and how to avoid it. In Proceedings of the 1989 American Control Conference, Pittsburgh, PA, USA, 21–23 June 1989; pp. 1693–1698. [Google Scholar]
  14. Morabito, F.; Teel, A.R.; Zaccarian, L. Nonlinear antiwindup applied to Euler-Lagrange systems. IEEE Trans. Robot. Autom. 2004, 20, 526–537. [Google Scholar] [CrossRef]
  15. Guo, Y.; Huang, B.; Li, A.; Wang, C. Integral sliding mode control for Euler-Lagrange systems with input saturation. Int. J. Robust Nonlinear Control 2019, 29, 1088–1100. [Google Scholar] [CrossRef]
  16. Chen, C.; Zhu, G.; Zhang, Q.; Zhang, J. Robust adaptive finite-time tracking control for uncertain euler-Lagrange systems with input saturation. IEEE Access 2020, 8, 187605–187614. [Google Scholar] [CrossRef]
  17. Wang, C.; Kuang, T. Neuroadaptive control for uncertain Euler-Lagrange systems with input and output constraints. IEEE Access 2021, 9, 51940–51949. [Google Scholar] [CrossRef]
  18. Shao, K.; Tang, R.; Xu, F.; Wang, X.; Zheng, J. Adaptive sliding mode control for uncertain Euler–Lagrange systems with input saturation. J. Frankl. Inst. 2021, 358, 8356–8376. [Google Scholar] [CrossRef]
  19. Hu, J.; Trenn, S.; Zhu, X. Funnel control for relative degree one nonlinear systems with input saturation. In Proceedings of the 2022 European Control Conference (ECC), London, UK, 12–15 July 2022; pp. 227–232. [Google Scholar]
  20. Berger, T. Input-constrained funnel control of nonlinear systems. IEEE Trans. Autom. Control 2024, 69, 5368–5382. [Google Scholar] [CrossRef]
  21. Wen, C.; Zhou, J.; Liu, Z.; Su, H. Robust adaptive control of uncertain nonlinear systems in the presence of input saturation and external disturbance. IEEE Trans. Autom. Control 2011, 56, 1672–1678. [Google Scholar] [CrossRef]
  22. Chen, M.; Tao, G.; Jiang, B. Dynamic surface control using neural networks for a class of uncertain nonlinear systems with input saturation. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 2086–2097. [Google Scholar] [CrossRef]
  23. Li, C.; Chen, L.; Guo, Y.; Ma, G. Formation–containment control for networked Euler–Lagrange systems with input saturation. Nonlinear Dyn. 2018, 91, 1307–1320. [Google Scholar] [CrossRef]
  24. Wang, L.; He, H.; Zeng, Z.; Ge, M.F. Model-independent formation tracking of multiple Euler–Lagrange systems via bounded inputs. IEEE Trans. Cybern. 2019, 51, 2813–2823. [Google Scholar] [CrossRef]
  25. Silva, T.; Souza, F.; Pimenta, L. Distributed formation-containment control with Euler-Lagrange systems subject to input saturation and communication delays. Int. J. Robust Nonlinear Control 2020, 30, 2999–3022. [Google Scholar] [CrossRef]
  26. Zhao, J.; Wang, Y.; Wang, Q. Event-triggered formation-containment control for multiple Euler-Lagrange systems with input saturation. J. Chin. Inst. Eng. 2022, 45, 313–323. [Google Scholar] [CrossRef]
  27. Ma, J.; Zhang, H.; Zhang, J.; Guo, X. Predefined-time control for multi-agent systems with input saturation: An improved dynamic surface control scheme. IEEE Trans. Autom. Sci. Eng. 2024, 22, 3661–3670. [Google Scholar] [CrossRef]
  28. Yuan, F.; Liu, Y.J.; Liu, L.; Lan, J. Adaptive neural network control of non-affine multi-agent systems with actuator fault and input saturation. Int. J. Robust Nonlinear Control 2024, 34, 3761–3780. [Google Scholar]
  29. Zhang, Y.; Gao, C.; Ma, L. Event-Triggered Formation Containment Control for Euler-Lagrange Systems with Input Saturation. In Proceedings of the 2024 2nd International Conference on Frontiers of Intelligent Manufacturing and Automation, Baotou, China, 9–11 August 2024; pp. 588–592. [Google Scholar]
  30. Chen, C.; Yin, S.; Zou, W.; Xiang, Z. Connectivity-Preserving Consensus of Heterogeneous Multiple Euler–Lagrange Systems with Input Saturation. IEEE Trans. Ind. Inform. 2025. (Early Access). [Google Scholar] [CrossRef]
  31. Bechlioulis, C.P.; Rovithakis, G.A. Robust Adaptive Control of Feedback Linearizable MIMO Nonlinear Systems with Prescribed Performance. IEEE Trans. Autom. Control 2008, 53, 2090–2099. [Google Scholar] [CrossRef]
  32. Bechlioulis, C.P.; Rovithakis, G.A. Decentralized Robust Synchronization of Unknown High Order Nonlinear Multi-Agent Systems With Prescribed Transient and Steady State Performance. IEEE Trans. Autom. Control 2017, 62, 123–134. [Google Scholar] [CrossRef]
  33. Li, S.; Xiang, Z. Adaptive prescribed performance control for switched nonlinear systems with input saturation. Int. J. Syst. Sci. 2018, 49, 113–123. [Google Scholar] [CrossRef]
  34. Cheng, C.; Zhang, Y.; Liu, S. Neural observer-based adaptive prescribed performance control for uncertain nonlinear systems with input saturation. Neurocomputing 2019, 370, 94–103. [Google Scholar] [CrossRef]
  35. Yong, K.; Chen, M.; Shi, Y.; Wu, Q. Flexible performance-based robust control for a class of nonlinear systems with input saturation. Automatica 2020, 122, 109268. [Google Scholar] [CrossRef]
  36. Ji, R.; Li, D.; Ma, J.; Ge, S.S. Saturation-tolerant prescribed control of MIMO systems with unknown control directions. IEEE Trans. Fuzzy Syst. 2022, 30, 5116–5127. [Google Scholar] [CrossRef]
  37. Wang, Y.; Hu, J.; Li, J.; Liu, B. Improved prescribed performance control for nonaffine pure-feedback systems with input saturation. Int. J. Robust Nonlinear Control 2019, 29, 1769–1788. [Google Scholar] [CrossRef]
  38. Fotiadis, F.; Rovithakis, G.A. Input-Constrained Prescribed Performance Control for High-Order MIMO Uncertain Nonlinear Systems via Reference Modification. IEEE Trans. Autom. Control 2024, 69, 3301–3308. [Google Scholar] [CrossRef]
  39. Bikas, L.N.; Rovithakis, G.A. Prescribed performance under input saturation for uncertain strict-feedback systems: A switching control approach. Automatica 2024, 165, 111663. [Google Scholar] [CrossRef]
  40. Trakas, P.S.; Bechlioulis, C.P. Adaptive Performance Control for Input Constrained MIMO Nonlinear Systems. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 7733–7745. [Google Scholar] [CrossRef]
  41. Gkesoulis, A.K.; Bechlioulis, C.P. A Low-Complexity Adaptive Performance Control Scheme for Unknown Nonlinear Systems Subject to Input Saturation. IEEE Control Syst. Lett. 2025, 9, 2115–2120. [Google Scholar] [CrossRef]
  42. Yang, H.; Jiang, B.; Yang, H.; Liu, H.H. Synchronization of multiple 3-DOF helicopters under actuator faults and saturations with prescribed performance. ISA Trans. 2018, 75, 118–126. [Google Scholar] [CrossRef]
  43. Shi, Z.; Zhou, C.; Guo, J.; Su, B. Event-triggered prescribed performance control for nonlinear multi-agent systems with input saturation. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 5050–5055. [Google Scholar]
  44. Trakas, P.S.; Tantoulas, A.; Bechlioulis, C.P. Formation Control of Nonlinear Multi-Agent Systems with Nested Input Saturation. Appl. Sci. 2023, 14, 213. [Google Scholar] [CrossRef]
  45. Chang, R.; Liu, Y.; Chi, X.; Sun, C. Event-based adaptive formation and tracking control with predetermined performance for nonlinear multi-agent systems. Neurocomputing 2025, 611, 128660. [Google Scholar] [CrossRef]
  46. Gkesoulis, A.K.; Psillakis, H.E. Distributed UAV formation control with prescribed performance. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 439–445. [Google Scholar]
  47. Khalil, H.K.; Grizzle, J.W. Nonlinear Systems; Prentice Hall: Upper Saddle River, NJ, USA, 2002; Volume 3. [Google Scholar]
  48. Bechlioulis, C.P.; Rovithakis, G.A. A low-complexity global approximation-free control scheme with prescribed performance for unknown pure feedback systems. Automatica 2014, 50, 1217–1226. [Google Scholar]
Figure 1. Closed-loop block diagram of the distributed prescribed performance control scheme for agent i.
Figure 2. The communication graph topology.
Figure 3. Trajectories of the BlueROV2 agents converging to the predefined positions around the leader's trajectory.
Figure 4. Leader and follower position trajectories for x, y, z, ϕ, and ψ.
Figure 5. Formation errors for x, y, z, ϕ, and ψ and the scaled prescribed performance functions ±5.9934 ρ_p.
Figure 6. Velocity errors for u, v, w, p, and r and the prescribed performance functions.
Figure 7. Control efforts: requested, actual, and their saturation levels for x, y, z, ϕ, and ψ.
Figure 8. Virtual velocity reference modification signals for x, y, z, ϕ, and ψ.
