Article

A Consensus-Driven Distributed Moving Horizon Estimation Approach for Target Detection Within Unmanned Aerial Vehicle Formations in Rescue Operations

by Salvatore Rosario Bassolillo 1,†, Egidio D’Amato 1,*,† and Immacolata Notaro 2,†

1 Department of Science and Technology, Università degli Studi di Napoli “Parthenope”, Centro Direzionale Isola C4, 80143 Napoli, Italy
2 Department of Engineering, Università degli Studi della Campania “L. Vanvitelli”, Via Roma, 29, 81031 Aversa, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Drones 2025, 9(2), 127; https://doi.org/10.3390/drones9020127
Submission received: 31 December 2024 / Revised: 5 February 2025 / Accepted: 7 February 2025 / Published: 9 February 2025
(This article belongs to the Special Issue Resilient Networking and Task Allocation for Drone Swarms)

Abstract

In recent decades, the increasing employment of unmanned aerial vehicles (UAVs) in civil applications has highlighted the potential of coordinated multi-aircraft missions. Such an approach offers advantages in terms of cost-effectiveness, operational flexibility, and mission success rates, particularly in complex scenarios such as search and rescue operations, environmental monitoring, and surveillance. However, achieving global situational awareness, although essential, represents a significant challenge due to computational and communication constraints. This paper proposes a Distributed Moving Horizon Estimation (DMHE) technique that integrates consensus theory and Moving Horizon Estimation to optimize computational efficiency, minimize communication requirements, and enhance system robustness. The proposed DMHE framework is applied to a formation of UAVs performing target detection and tracking in challenging environments. It provides a fully distributed architecture that enables UAVs to estimate the position and velocity of other fleet members while simultaneously detecting static and dynamic targets. The effectiveness of the technique is demonstrated through several numerical simulations, including an in-depth sensitivity analysis of key algorithm parameters, such as the fleet network topology and the number of consensus iterations, and an evaluation of robustness against node faults and information losses.

1. Introduction

Unmanned aerial vehicles (UAVs) are widely recognized as cost-effective and versatile mobile sensing platforms capable of autonomously performing tasks that are challenging, hazardous, or monotonous for human operators. Applications span a wide range of domains such as agricultural monitoring, exploration and mapping, and search and rescue missions, as well as surveillance and tracking [1]. The increasing complexity of missions has highlighted the use of coordinated UAV fleets as a promising approach to achieve greater operational efficiency and resilience [2].
A wide range of applications that involve UAV fleets has been documented in the literature, in fields such as surveillance, mapping, and distributed sensing [3,4,5,6]. However, situational awareness is a substantial challenge that must be addressed to allow effective coordination of the UAV fleet [7]. UAV swarms can be conceptualized as large-scale multi-sensor systems, similar to grids used for data fusion in sensor networks [8]. In this field, an interesting task is the accurate localization of targets, which relies heavily on the effectiveness of the data fusion process and the relative positioning of the UAVs with respect to the target [9,10].
The problem of localizing moving targets has been studied in signal processing and control literature [11,12,13,14], driven by applications across civilian, military, and transportation sectors. The primary goal of target tracking is to estimate the trajectory of a mobile object, which can be modeled using various dynamic frameworks.
In this field, several studies focus on a multi-lateration approach using distance measurements from ToF (time of flight) or ToA (time of arrival) sensors, where UAVs, acting as mobile agents, share their positions and ToA data to estimate the target location. The literature also addresses multi-target tracking using static sensors [15,16] or mobile agents [17] under various filtering approaches. Recent advancements in UAV technology and multi-drone coordination [18,19,20] have encouraged industrial adoption of mobile tracking methodologies.
However, traditional estimation techniques are predominantly centralized [21,22,23,24], relying on a fusion center to aggregate data from all sensors and apply algorithms such as Kalman filtering (KF), extended Kalman filtering (EKF), or maximum likelihood estimation (MLE) [23,25]. In these techniques, as the fleet size increases, communication and computational bottlenecks, as well as increased software complexity, can be expected. Furthermore, these limitations are compounded by the risk of single-point failure in the leader node, which can compromise the entire system operation.
The development of cloud-based and distributed computing frameworks has spurred interest in decentralized estimation approaches [26,27]. Decentralized architectures offer a promising alternative by distributing computational and communication burdens across all nodes in the fleet. In these systems, each node iteratively refines its local estimate, incorporating data from neighboring nodes, in order to collaboratively track the target state [28,29]. This approach relies on determining efficient protocols for data exchange to achieve agreement on the target measurements, a challenge known as the consensus or synchronization problem [30,31,32,33].
One critical challenge in decentralized estimation lies in ensuring observability. To achieve stable estimation with bounded error, specific observability conditions must be satisfied. Semi-centralized approaches [34,35,36,37] address this issue by ensuring local observability within each agent’s neighborhood, which requires densely connected networks where each UAV communicates with at least three neighbors [38]. However, several studies have proposed distributed estimation protocols [39,40] that relax these requirements. These protocols eliminate the need for local observability, reducing the communication burden and enabling operation under less restrictive network connectivity.
Key considerations for decentralized methods include scalability and robustness to dynamic changes in network topology [41]. Scalability ensures that computational complexity grows modestly with the size of the network, while robustness enables the system to adapt to changes in relative positions of UAVs or environmental factors, preserving the integrity of the fleet even as formation evolves [42].
Decentralized Kalman filters (DKFs) represent a key innovation in distributed estimation, with numerous implementations documented in the literature [43,44,45,46].
When constraints on state variables are present or the noise deviates from being white and Gaussian, Kalman filtering techniques may become suboptimal and, in some cases, unstable. Moving Horizon Estimation (MHE) has been studied as a promising solution, presenting some conceptual similarities to model predictive control (MPC), as both methods rely on optimization problems over a moving time horizon [47,48]. Compared to traditional estimation methods, MHE is capable of handling constraints and providing more accurate estimates in complex scenarios [49]. Several MHE schemes have been developed in recent years to address estimation problems in linear, nonlinear, and hybrid systems [50,51,52,53].
Consequently, a distributed MHE (DMHE) framework can offer several advantages: it allows constraints to be directly incorporated into the optimization problem that is solved at each time step, it ensures optimal estimation, and it guarantees convergence within a deterministic framework under weak local observability conditions [54].
Several distributed implementations of MHE have been proposed, based on fully connected communication graphs. In [55], a DMHE scheme was proposed for non-linear constrained systems, using consensus and ensuring estimation error stability under suitable conditions. The authors in [56] presented a scalable distributed state estimation method for linear systems over peer-to-peer sensor networks, using MHE to handle constraints and ensure stable error dynamics under minimal connectivity requirements. Ref. [57] introduced a DMHE with event-triggered communication, optimizing data transmission in wireless sensor networks while ensuring bounded estimation error under strong connectivity and collective observability conditions. The authors in [58] considered a DMHE for linear systems in wireless sensor networks, employing event-triggered communication to reduce transmissions while ensuring stability and bounded estimation error under connectivity and observability conditions. In [59], the MHE-based estimation in a distributed power system was described, leveraging operator splitting to handle constraints, enhance robustness, and enable parallel computation for improved estimation accuracy. Ref. [60] addressed MHE for networked systems over relay channels, designing an estimator that ensures mean-square bounded error dynamics under packet loss conditions.
In this paper, a consensus-based MHE is proposed for the situational awareness of a UAV formation operating in complex and uncertain environments. The fully distributed architecture allows each UAV to estimate the position and velocity of all other UAVs in the formation and supports the detection of static and dynamic targets. The use of local models with a reduced dimensionality decreases the computational burden while maintaining the accuracy and robustness of the estimates.
To prove the effectiveness of the proposed strategy, this paper includes a comprehensive sensitivity analysis of key parameters affecting algorithm performance, such as network topology and the number of consensus iterations. Numerical simulations are presented to evaluate the robustness of the algorithm against node faults and information losses.
The paper is organized as follows:
  • In Section 2, the problem statement is presented, detailing the mathematical model used to describe the dynamics of both the UAVs and the target. This section also provides a comprehensive description of the sensor configurations employed in the system, including their capabilities and limitations.
  • Section 3 introduces the proposed DMHE algorithm. The theoretical foundations of the method are explained, highlighting its integration with consensus-based techniques to enable a distributed and scalable estimation framework.
  • In Section 4, the numerical results are presented to evaluate the performance of the proposed approach. This includes a sensitivity analysis of key parameters, such as the network topology and the number of consensus iterations, along with a reliability assessment under conditions of node faults and information losses.

2. Problem Statement

We consider a formation of N autonomous UAVs involved in a mission of aerial surveillance within a prescribed area of interest. The primary objective is to detect the presence of $N_t$ non-collaborative targets while ensuring global situational awareness, i.e., estimating the positions of all UAVs within the formation.
To address these tasks, each UAV employs an MHE algorithm to process local sensor data and information shared by neighboring vehicles through dedicated communication channels.
The following assumptions have been considered:
  • Each UAV is equipped with its own flight control system to ensure stability and maneuverability.
  • For each UAV, the on-board Attitude and Heading Reference System (AHRS) provides reliable measurements of roll, pitch, and yaw with respect to the NED frame.
  • The frequency of the AHRS, $f_{AHRS}$, is higher than the frequency of the DMHE, $f_{DMHE}$, with $f_{AHRS} = 10 f_{DMHE}$.
At a time instant $k$, the dynamics of the $i$-th UAV is defined by the evolution of its position in an inertial frame, e.g., the north–east–down (NED) frame, $S_i(k) = [X_i(k), Y_i(k), Z_i(k)]^T$, and of its velocity $V_i(k) = [V_{x_i}(k), V_{y_i}(k), V_{z_i}(k)]^T$, expressed in state-space form as follows:

$$\xi_i(k+1) = A_i \, \xi_i(k) + B_i \, u_i(k) + \upsilon_i(k) \qquad (1)$$
where
- $A_i$ and $B_i$ model the temporal evolution of the UAV state. Their explicit forms are

$$A_i = \begin{bmatrix} 1 & 0 & 0 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 & \Delta t & 0 \\ 0 & 0 & 1 & 0 & 0 & \Delta t \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}; \qquad B_i = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \Delta t & 0 & 0 \\ 0 & \Delta t & 0 \\ 0 & 0 & \Delta t \end{bmatrix} \qquad (2)$$

- $\xi_i(k) = [S_i(k)^T, V_i(k)^T]^T$ is the state vector.
- $u_i(k) = [a_{x_i}(k), a_{y_i}(k), a_{z_i}(k)]^T$ is the input vector, composed of the linear accelerations in the NED frame, obtained by applying a suitable rotation to the raw data provided by the accelerometers and subtracting the gravitational acceleration.
- $\upsilon_i(k)$ represents the process noise.
- $\Delta t$ is an appropriate sampling time.
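To make the discrete-time model (1)-(2) concrete, the following minimal Python sketch builds $A_i$ and $B_i$ and propagates the state by one step. It is an illustration only, not the authors' implementation; the 1 s sampling time, the initial state, and the noise magnitude are assumptions introduced for the example.

```python
import numpy as np

dt = 1.0  # sampling time Delta t (assumed; the simulations of Section 4 use 1 s)

# Eq. (2): position integrates velocity, velocity integrates the commanded acceleration
A_i = np.block([[np.eye(3), dt * np.eye(3)],
                [np.zeros((3, 3)), np.eye(3)]])
B_i = np.vstack([np.zeros((3, 3)), dt * np.eye(3)])

# Eq. (1): one propagation step xi(k+1) = A_i xi(k) + B_i u_i(k) + upsilon_i(k)
xi = np.array([0.0, 0.0, 20.0, 2.0, 0.0, 0.0])  # [X, Y, Z, Vx, Vy, Vz] in the NED frame
u_i = np.array([0.0, 0.0, 0.0])                 # accelerations from the AHRS/accelerometers
upsilon = 1e-2 * np.random.randn(6)             # process noise (assumed magnitude)
xi_next = A_i @ xi + B_i @ u_i + upsilon
```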
To navigate and accurately localize targets, each UAV is equipped with a suite of sensors, including a GPS receiver to determine the UAV position and velocity, and multiple transponders, ToF cameras, or millimeter-wave radars [61], which are used to measure relative distances between UAVs in the formation as well as the distances between UAVs and any target in the scenario.
It is worth noting that, in this framework, both UAVs and targets are assumed to be equipped with devices able to emit and detect signals for relative distance estimation. They can implement technologies such as time of flight (ToF), time of arrival (ToA), or difference ToA (DToA), in order to compute distances between vehicles. Active cooperative systems, like transponders [62] or avalanche transceivers (ARTVA) [63], require that vehicles are equipped with compatible transceivers to exchange signals and compute relative distances. On the other hand, non-cooperative devices, such as millimeter-wave radars [64] or ToF cameras [65], do not require targets to be equipped with dedicated transmitters. In this case, we assume that the identification of vehicles within the sensor range, together with data association, is given and reliable, as these aspects are beyond the scope of the present work. In this paper, we refer to a generic transponder definition.
Consequently, at any time instant $k$, each UAV acquires the following data array:

$$\tilde{z}_i(k) = \left[ \tilde{z}_i^{GPS}(k)^T, \; \tilde{z}_i^{D}(k)^T \right] \qquad (3)$$

where $\tilde{z}_i^{GPS}(k)$ represents the measurement provided by the GPS, and $\tilde{z}_i^{D}(k)$ is the vector of distances from other vehicles and targets measured by the transponders.
We suppose that the GPS supplies data about the position $S_i^{GPS}(k)$ and velocity $V_i^{GPS}(k)$ vectors,

$$\tilde{z}_i^{GPS}(k) = \left[ S_i^{GPS}(k)^T, \; V_i^{GPS}(k)^T \right]^T + \tilde{\nu}_i^{GPS}(k) \qquad (4)$$

where $\tilde{\nu}_i^{GPS}(k)$ denotes the GPS noise.
On the other hand, we considered transponders capable of measuring the mutual distance $d_{i,j}(k) = \| S_j(k) - S_i(k) \|_2$ between the $i$-th UAV and any other vehicle $j$ within its sensor range $\mathcal{S}_i$.
By denoting with $\bar{d}_i$ the radius of $\mathcal{S}_i$, at time instant $k$ it is possible to define the set of visible UAVs, $\mathcal{N}_i(k)$, and the set of visible targets, $\mathcal{N}_i^t(k)$, as follows:

$$\mathcal{N}_i(k) = \{ 1, 2, \ldots, N \} \cap \{ j : d_{i,j}(k) \leq \bar{d}_i \}$$

$$\mathcal{N}_i^t(k) = \{ N+1, N+2, \ldots, N+N_t \} \cap \{ j : d_{i,j}(k) \leq \bar{d}_i \}$$
Consequently, the vector $\tilde{z}_i^{D}(k)$ has a length equal to the cardinality of the set $\mathcal{N}_i(k) \cup \mathcal{N}_i^t(k)$ and collects the corresponding distance measurements:

$$\tilde{z}_i^{D}(k) = \left[ d_{i,j}(k) \right]_{j \in \mathcal{N}_i(k) \cup \mathcal{N}_i^t(k)} + \tilde{\nu}_i^{D}(k) \qquad (5)$$

with $\tilde{\nu}_i^{D}(k)$ the noise vector affecting the measurements.
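As an illustration of how the measurement array (3)-(5) is assembled, the sketch below stacks a noisy GPS reading with the distances to every vehicle or target inside the sensor range. The function name, the range $\bar{d}_i = 300$ m, and the noise levels are assumptions made for the example, not values taken from the paper.

```python
import numpy as np

def build_measurement(i, positions, velocities, d_bar=300.0,
                      gps_noise_std=0.6, dist_noise_std=0.01):
    """Stacked measurement of Eqs. (3)-(5) for vehicle i.
    positions, velocities: arrays of shape (N + Nt, 3) for all UAVs and targets."""
    # GPS part, Eq. (4): own position and velocity corrupted by noise
    z_gps = np.concatenate([positions[i], velocities[i]]) + gps_noise_std * np.random.randn(6)
    # Distance part, Eq. (5): ranges to every vehicle/target within the radius d_bar
    dists, visible = [], []
    for j in range(len(positions)):
        d = np.linalg.norm(positions[j] - positions[i])
        if j != i and d <= d_bar:
            visible.append(j)
            dists.append(d + dist_noise_std * np.random.randn())
    return np.concatenate([z_gps, np.array(dists)]), visible
```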
At any time instant $k$, complete situational awareness is achieved if it is possible to estimate the global state vector $\Xi(k) \in \mathcal{X} \subseteq \mathbb{R}^{6(N+N_t)}$:

$$\Xi(k) = \left[ \xi_1(k)^T, \ldots, \xi_N(k)^T, \xi_{N+1}(k)^T, \ldots, \xi_{N+N_t}(k)^T \right]^T \qquad (6)$$

where $\xi_i(k) = [S_i(k)^T, V_i(k)^T]^T$, with $i \in \{1, \ldots, N\}$, is the state of the $i$-th aircraft, defined by (1), and $\xi_{N+j}(k) = [S_{N+j}(k)^T, V_{N+j}(k)^T]^T$, with $j \in \{1, \ldots, N_t\}$, is the state of the $j$-th target, composed of the position vector $S_{N+j}(k) = [X_{N+j}(k), Y_{N+j}(k), Z_{N+j}(k)]^T$ and the velocity vector $V_{N+j}(k) = [V_{x_{N+j}}(k), V_{y_{N+j}}(k), V_{z_{N+j}}(k)]^T$, defined in the NED inertial reference frame as follows:
$$\xi_{N+j}(k+1) = A_{N+j} \, \xi_{N+j}(k) + \upsilon_{N+j}(k) \qquad (7)$$

with

$$A_{N+j} = A_i, \qquad \text{with } i = 1, \ldots, N, \;\; j = 1, \ldots, N_t \qquad (8)$$
The overall dynamics of $\Xi(k)$ can be defined as follows:

$$\Xi(k+1) = A \, \Xi(k) + B \, U(k) + \Upsilon(k) \qquad (9)$$

where $U(k) = [u_1(k)^T, u_2(k)^T, \ldots, u_N(k)^T]^T$ represents the global input vector and the global process noise is indicated with $\Upsilon(k) = [\upsilon_1(k)^T, \ldots, \upsilon_N(k)^T, \upsilon_{N+1}(k)^T, \ldots, \upsilon_{N+N_t}(k)^T]^T$.
The model dynamics matrices $A$ and $B$ are block-diagonal matrices defined as follows:

$$A = \begin{bmatrix} A_1 & \cdots & 0_{6\times6} & 0_{6\times6} & \cdots & 0_{6\times6} \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0_{6\times6} & \cdots & A_N & 0_{6\times6} & \cdots & 0_{6\times6} \\ 0_{6\times6} & \cdots & 0_{6\times6} & A_{N+1} & \cdots & 0_{6\times6} \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0_{6\times6} & \cdots & 0_{6\times6} & 0_{6\times6} & \cdots & A_{N+N_t} \end{bmatrix} \qquad (10)$$

$$B = \begin{bmatrix} B_1 & \cdots & 0_{6\times3} \\ \vdots & \ddots & \vdots \\ 0_{6\times3} & \cdots & B_N \\ 0_{6\times3} & \cdots & 0_{6\times3} \\ \vdots & & \vdots \\ 0_{6\times3} & \cdots & 0_{6\times3} \end{bmatrix} \qquad (11)$$

where $0_{n \times p}$ denotes a null matrix with $n$ rows and $p$ columns, $A_i$ and $B_i$ are reported in (2), and $A_{N+j}$ is given in (8). The last $N_t$ block rows of $B$ are zero, since the targets are not actuated by the formation.
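The global matrices (10)-(11) are simple block-diagonal compositions of the single-vehicle blocks in (2). The following sketch assembles them for arbitrary $N$ and $N_t$; it is one possible construction under the stated model, not code from the paper.

```python
import numpy as np
from scipy.linalg import block_diag

def global_matrices(N, Nt, dt=1.0):
    """Block-diagonal global matrices A and B of Eqs. (10)-(11)."""
    A_i = np.block([[np.eye(3), dt * np.eye(3)],
                    [np.zeros((3, 3)), np.eye(3)]])
    B_i = np.vstack([np.zeros((3, 3)), dt * np.eye(3)])
    A = block_diag(*([A_i] * (N + Nt)))          # UAVs and targets share the same block, Eq. (8)
    B = np.vstack([block_diag(*([B_i] * N)),     # only the N UAVs are actuated...
                   np.zeros((6 * Nt, 3 * N))])   # ...the target block rows of B are zero
    return A, B

A, B = global_matrices(N=9, Nt=1)
print(A.shape, B.shape)  # (60, 60) (60, 27)
```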
For each vehicle $i$, the measurement vector $\tilde{z}_i(k)$, defined by (3), (4), and (5), can be expressed as a function of the global state vector $\Xi$:

$$\tilde{z}_i(k) = h\left( \Xi(k) \right) + \tilde{\nu}_i(k) \qquad (12)$$

where $h(\cdot)$ is the measurement function, mapping the global state vector $\Xi(k)$ to the expected measurements of vehicle $i$, and $\tilde{\nu}_i(k) = [\tilde{\nu}_i^{GPS}(k)^T, \tilde{\nu}_i^{D}(k)^T]^T$ represents the measurement noise.
At each time step $k$, Equation (12) can be linearized as follows:

$$z_i(k) = H_i \, \Xi(k) + \nu_i(k) \qquad (13)$$

where

$$z_i(k) = \tilde{z}_i(k) - \tilde{z}_i(k-1) + H_i \, \Xi(k) \qquad (14)$$
The matrix $H_i$ is given by

$$H_i = \left[ C_{i,1} \;\; \cdots \;\; C_{i,N} \;\; C_{i,N+1} \;\; \cdots \;\; C_{i,N+N_t} \right] \qquad (15)$$

where the blocks of $H_i$ are defined as

$$C_{i,i}(k) = \left. \frac{\partial h}{\partial \xi_i} \right|_{\Xi(k-1)} \qquad (16)$$

and

$$C_{i,j}(k) = \begin{cases} \left. \dfrac{\partial h}{\partial \xi_j} \right|_{\Xi(k-1)} & \text{if } j \in \mathcal{N}_i(k) \cup \mathcal{N}_i^t(k) \\ 0_{p \times 6} & \text{otherwise} \end{cases} \qquad (17)$$
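The GPS rows of $H_i$ are constant selection rows on the vehicle's own position and velocity, while each range measurement contributes a row obtained by differentiating $d_{i,j}$, as in (16)-(17). The sketch below evaluates these contributions for a single range measurement; it is a plausible reading of the linearization step, with names and layout chosen for the example rather than taken from the paper.

```python
import numpy as np

def range_measurement_rows(S_i, S_j):
    """Rows contributed by one range measurement d_ij = ||S_j - S_i|| to the Jacobian
    blocks C_{i,i} and C_{i,j} of Eqs. (16)-(17). The velocity entries are zero
    because the range does not depend on the velocities."""
    diff = S_j - S_i
    d = np.linalg.norm(diff)
    row_wrt_own = np.hstack([-diff / d, np.zeros(3)])    # derivative w.r.t. xi_i
    row_wrt_other = np.hstack([diff / d, np.zeros(3)])   # derivative w.r.t. xi_j
    return row_wrt_own, row_wrt_other

r_own, r_other = range_measurement_rows(np.array([0.0, 0.0, 20.0]),
                                        np.array([500.0, 50.0, 0.0]))
```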

3. Consensus on Moving Horizon Estimation

To improve the robustness of the fleet navigation system, a DMHE was introduced, combining the concepts of MHE and consensus theory.
The MHE is an optimization-based strategy that estimates unknown variables or parameters using a series of past measurements [66]. Typically, the MHE is employed as a state observer to estimate, at each time step $k$, the state dynamics over the time horizon $\{ k - n_t, k - n_t + 1, \ldots, k \}$ by solving a constrained optimization problem. The objective is the minimization of the discrepancy between the model prediction and the measurements collected in the time window.
Adopting a centralized MHE algorithm, managed by a leader within the fleet or a ground station that gathers measurements from all vehicles, is impractical due to the rapid growth in design variables in the optimization process. Furthermore, if the central agent fails, the fleet would lose critical state information, with the consequent risk of failures in navigation-dependent functions, including guidance systems.
To address these issues, each vehicle is equipped with embedded systems capable of running a local version of MHE, dividing the estimation problem into N smaller optimization tasks. The solutions from each vehicle are then combined using consensus theory.
Consensus theory operates on the premise that a group of agents can reach an agreement on a global function value through local information exchanges. In a typical consensus estimation scenario, each agent estimates the global state and communicates partial measurements or state estimates with its neighbors. Therefore, inter-vehicle communication is crucial for achieving consensus.
The communication network among the collaborative agents can be represented as a sensor node network. At any given time instant $k$, the communication topology can be modeled as a directed graph $\mathcal{G}(k) = (\mathcal{A}, \mathcal{E}(k))$, where $\mathcal{A} = \{1, 2, \ldots, N\}$ is the set of nodes corresponding to the aircraft and $\mathcal{E}(k)$ is the set of edges. Each edge $(i,j) \in \mathcal{E}(k)$ represents a communication link from UAV $j$ to UAV $i$, determined by their relative positions and communication range; consequently, at time instant $k$, UAV $i$ can receive information from UAV $j$ only if $(i,j) \in \mathcal{E}(k)$.
Define $\mathcal{M}_i(k) = \{ j : (i,j) \in \mathcal{E}(k) \}$ as the set of neighboring vehicles of the $i$-th agent, and let $\eta_i(k)$ denote the local information available to the $i$-th aircraft at time $k$. Convergence is achieved in a finite number of consensus steps $L$ when all agents agree on the same value of $\eta_i(k)$ [67]. At each iteration $l$, with $l \in \{1, 2, \ldots, L\}$, the variables $\eta_i^l(k)$ are computed as follows:

$$\eta_i^l(k) = \sum_{j \in \mathcal{A}} W_{i,j}(k) \, \eta_j^{l-1}(k) \qquad (18)$$
where the coefficients $W_{i,j}(k)$ are computed using the Metropolis formula [68]:

$$W_{i,j} = \begin{cases} \dfrac{1}{1 + \max\{\delta_i, \delta_j\}} & \text{if } j \in \mathcal{M}_i(k) \\ 1 - \displaystyle\sum_{p \in \mathcal{M}_i(k)} W_{i,p} & \text{if } i = j \\ 0 & \text{otherwise} \end{cases} \qquad (19)$$

Here, $\delta_i$ and $\delta_j$ denote the degrees of nodes $i$ and $j$, respectively.
For convergence, the communication graph $\mathcal{G}(k)$ must remain connected at all times $k$, and at least $L > 1$ consensus iterations must be performed.
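A compact sketch of the Metropolis weighting (19) and of the averaging step (18) is reported below. It is a generic illustration of consensus on a connected graph, not the paper's software; the ring topology, the scalar data, and the function names are assumptions made for the example.

```python
import numpy as np

def metropolis_weights(adjacency):
    """Consensus weights of Eq. (19) from a symmetric 0/1 adjacency matrix."""
    N = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j and adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def consensus(eta, W, L):
    """L averaging iterations of Eq. (18); eta holds one row per agent."""
    for _ in range(L):
        eta = W @ eta
    return eta

# Ring of 5 agents agreeing on scalar local values: all rows converge towards the average
adj = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
W = metropolis_weights(adj)
print(consensus(np.array([[1.0], [2.0], [3.0], [4.0], [5.0]]), W, L=20))
```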

3.1. Distributed Moving Horizon Estimation

Denote by $\hat{\Xi}^i(k|k)$ the estimate of the global state vector computed by the $i$-th UAV at time $k$ using all the information acquired up to time $k$.
The goal of the proposed DMHE is to ensure that each aircraft has complete situational awareness and, thus, an estimate of the global state vector $\hat{\Xi}^i(p|k)$, with $p = k - n_t, \ldots, k$.
In [56], each agent solves an MHE problem at every consensus step and then exchanges information about the estimates with its neighbors according to the consensus protocol.
To reduce the computational burden, each aircraft $i$ does not solve an optimization problem involving all the design variables; it estimates only the local state vector, which includes its own state as well as the position and velocity of the visible targets:

$$\Xi_i(k) = \left[ \xi_i(k)^T, \; \left[ \xi_j(k)^T \right]_{j \in \mathcal{N}_i^t(k)} \right]^T \qquad (20)$$
The local state vector is derived from the global state vector as follows:

$$\Xi_i(k) = T_i(k) \, \Xi(k) \qquad (21)$$

where $T_i(k)$ is used to pick specific states, or linear combinations thereof, from the global state vector. The number of rows of $T_i(k)$ depends on the targets visible to the $i$-th aircraft at time $k$. This matrix consists of elements equal to 1 or 0, depending on the states to be selected, and varies over time according to $\mathcal{N}_i^t(k)$.
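The selection matrix $T_i(k)$ in (21) can be built by stacking $6 \times 6$ identity blocks in the columns corresponding to the selected vehicle and targets. The sketch below shows one possible construction; the indexing conventions (UAVs indexed from 0, targets indexed from 0 within the target set) and the function name are assumptions for the example.

```python
import numpy as np

def selection_matrix(i, visible_targets, N, Nt):
    """Selection matrix T_i(k) of Eq. (21): extracts the 6 states of UAV i and the
    6 states of each currently visible target from the global state vector."""
    blocks = [i] + [N + j for j in visible_targets]   # global block indices to keep
    T = np.zeros((6 * len(blocks), 6 * (N + Nt)))
    for r, c in enumerate(blocks):
        T[6 * r:6 * r + 6, 6 * c:6 * c + 6] = np.eye(6)
    return T

T_1 = selection_matrix(0, visible_targets=[0], N=9, Nt=1)    # UAV #1 seeing the only target
print(T_1.shape)                                             # (12, 60)
print(np.allclose(T_1 @ np.linalg.pinv(T_1), np.eye(12)))    # T_i T_i^+ = I on the local space
```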
Figure 1 illustrates the communication between two generic nodes of the formation, highlighting the local estimation flow during consensus.

3.2. Local MHE

Consider a sliding window spanning $n_t + 1$ time steps into the past. The measurements acquired by the $i$-th UAV within this sliding window are given by

$$Z_i(k) = \left[ z_i(k-n_t)^T, \; z_i(k-n_t+1)^T, \; \ldots, \; z_i(k)^T \right]^T \qquad (22)$$
At each time step $k$, for each consensus iteration $l$, with $l \in \{1, \ldots, L\}$, a local constrained optimization problem is solved to compute an estimate of the state vector $\Xi_i(k)$, denoted as $\Xi_i^{i,l}(k)$.
Given a local cost function $J_i^{k,l}$, defined as

$$J_i^{k,l} = \sum_{p=k-n_t}^{k} \left\| \Xi_i^{i,l}(p|k) - \bar{\Xi}_i(p) \right\|_{Q_i}^2 + \sum_{p=k-n_t}^{k} \left\| H_i \, T_i^+(p) \, \Xi_i^{i,l}(p|k) - z_i(p) \right\|_{R_i}^2 + \left\| \Xi_i^{i,l}(k-n_t|k) - \bar{\Xi}_i(k-n_t) \right\|_{P_i}^2 \qquad (23)$$
with three main components:
  • State dynamics: the first term addresses the state dynamics, weighted by a positive semi-definite matrix $Q_i$.
  • Measurement-model consistency: the second term accounts for the discrepancy between the measured data and the model output, weighted by a positive definite matrix $R_i$, which corresponds to the inverse of the measurement noise covariance.
  • Arrival cost: the third term penalizes the error between the estimated state $\Xi_i^{i,l}(k-n_t|k)$ and the prediction $\bar{\Xi}_i(k-n_t)$, weighted by a suitable positive definite matrix $P_i$, contributing to the arrival cost [50,52,69,70].
The optimization problem is defined as follows:

$$\min_{\Xi_i^{i,l}(k-n_t)} \; J_i^{k,l} \qquad (24)$$

subject to the following constraints:

$$\bar{\Xi}_i(p+1) = A_i \, \bar{\Xi}_i(p) + B_i \, u_i(p), \qquad p = k-n_t, \ldots, k \qquad (25)$$

$$\bar{\Xi}_i(k-n_t-1) = \hat{\Xi}_i^i(k-n_t-1 \,|\, k-1) \qquad (26)$$

$$\Xi_i^{i,l}(p|k) \in \mathcal{X}_j, \qquad p = k-n_t, k-n_t+1, \ldots, k, \quad \forall j \in \mathcal{N}_i^t(k) \qquad (27)$$
where the matrix $A_i = T_i(p) \, A \, T_i(p)^+$ is obtained by extracting from the global matrix $A$ the rows and columns corresponding to the local state, whereas $B_i = T_i(p) \, B$ is the block of $B$ relative to the state vector $\Xi_i$ and the input vector $u_i$; $T_i(p)^+$ indicates the pseudoinverse of $T_i(p)$.
The dynamics constraints (25) define the dynamics of the a priori predicted state $\bar{\Xi}_i(p)$ in the time window $\{ k-n_t-1, \ldots, k \}$.
The initial condition (26), $\hat{\Xi}_i^i(k-n_t-1|k-1)$, is obtained by selecting the suitable components of the solution at time step $k-1$, after completing the consensus iterations, i.e., $\hat{\Xi}_i^i(k-n_t-1|k-1) = \Xi_i^{i,L}(k-n_t-1|k-1)$.
The inequality constraints (27) restrict the possible locations of the targets to lie within the sensor range $\mathcal{S}_i$.
The optimization problem represents a quadratic programming (QP) problem with linear constraints, which can be solved efficiently using well-established numerical solvers [71].
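Dropping the inequality constraints (27) for simplicity, the cost (23) is quadratic in the window states and the problem reduces to a weighted linear least-squares solve. The sketch below illustrates this reduced version; it assumes a measurement matrix that is constant over the window, positive definite weights, and a generic least-squares routine in place of the QP solvers of [71], so it should be read as a conceptual outline rather than the authors' implementation.

```python
import numpy as np

def local_mhe_unconstrained(Xi_bar, H_loc, z_window, Q, R, P):
    """Unconstrained version of the local MHE (23)-(24), solved as least squares.
    Xi_bar:   (nt+1, n) a-priori predicted local states over the window
    H_loc:    (m, n)    local measurement matrix H_i T_i^+ (assumed constant here)
    z_window: (nt+1, m) measurements z_i(p) over the window
    Q, R, P:  weighting matrices of the three terms of Eq. (23), assumed positive definite"""
    nt1, n = Xi_bar.shape
    sqQ, sqR, sqP = (np.linalg.cholesky(M) for M in (Q, R, P))
    rows, rhs = [], []
    for p in range(nt1):
        sel = np.zeros((n, nt1 * n))
        sel[:, p * n:(p + 1) * n] = np.eye(n)      # picks Xi(p|k) out of the stacked unknowns
        rows.append(sqQ.T @ sel);         rhs.append(sqQ.T @ Xi_bar[p])     # prediction term
        rows.append(sqR.T @ H_loc @ sel); rhs.append(sqR.T @ z_window[p])   # measurement term
        if p == 0:
            rows.append(sqP.T @ sel);     rhs.append(sqP.T @ Xi_bar[0])     # arrival cost
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return sol.reshape(nt1, n)                     # estimated window trajectory
```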

3.3. Consensus Iteration

After solving the problem defined in Section 3.2, each vehicle computes an updated estimate of the global state vector as follows:

$$\Xi^{i,l}(k-n_t|k) = \Xi^{i,l-1}(k-n_t|k) - T_i^+(k-n_t) \, \Xi_i^{i,l-1}(k-n_t|k) + T_i^+(k-n_t) \, \Xi_i^{i,l}(k-n_t|k) \qquad (28)$$
Vehicle $i$ exchanges its vector $\Xi^{i,l}(k-n_t|k)$ with the neighbors $j \in \mathcal{M}_i(k)$ according to the consensus paradigm [72], in order to reach agreement on the global state estimate across all UAVs:

$$\Xi^{i,l+1}(k-n_t|k) = \sum_{j=1}^{N} W_{i,j} \, \Xi^{j,l}(k-n_t|k) \qquad (29)$$
The final estimate of the global state is obtained at the end of the consensus iterations as follows:

$$\hat{\Xi}^i(k-n_t|k) = \Xi^{i,L}(k-n_t|k) \qquad (30)$$
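One consensus round, i.e., the local re-injection (28) followed by the neighborhood averaging (29), can be written compactly as below. For clarity the sketch updates all agents at once with the full weight matrix, whereas in the distributed implementation each UAV computes only its own row; the names and array shapes are assumptions for the example.

```python
import numpy as np

def consensus_round(Xi_glob, Xi_loc_prev, Xi_loc_new, T_pinv, W):
    """One round of Eqs. (28)-(29).
    Xi_glob:     (N, n_glob) global estimates currently held by the N agents
    Xi_loc_prev: list of the agents' local estimates before the last MHE solve
    Xi_loc_new:  list of the agents' local estimates after the last MHE solve
    T_pinv:      list of pseudoinverses T_i^+, mapping local vectors into the global frame
    W:           Metropolis weight matrix of Eq. (19)"""
    for i in range(Xi_glob.shape[0]):
        # Eq. (28): overwrite the locally re-estimated components inside the global vector
        Xi_glob[i] = Xi_glob[i] - T_pinv[i] @ Xi_loc_prev[i] + T_pinv[i] @ Xi_loc_new[i]
    # Eq. (29): weighted averaging with the neighbors' global estimates
    return W @ Xi_glob
```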
The proposed algorithm is detailed in pseudocode in Algorithm 1 and depicted in Figure 2.
Algorithm 1: Decentralized Moving Horizon Estimator (DMHE)
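As a complement to Algorithm 1, the following toy, self-contained snippet illustrates the alternation between local estimation and consensus averaging described in Sections 3.1-3.3. The local MHE solve is replaced by a noisy direct observation of each agent's own block of the global state, and the topology, dimensions, and noise levels are assumptions made purely for illustration; it is not the authors' pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5, 10
truth = rng.uniform(-10.0, 10.0, size=N)                 # one scalar "state" per agent

# Ring topology and Metropolis weights, Eq. (19)
adj = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
deg = adj.sum(axis=1)
W = np.array([[1.0 / (1.0 + max(deg[i], deg[j])) if adj[i, j] else 0.0
               for j in range(N)] for i in range(N)])
W += np.diag(1.0 - W.sum(axis=1))

estimates = rng.normal(0.0, 10.0, size=(N, N))            # random initial global estimates
for l in range(L):                                        # consensus iterations
    for i in range(N):
        # stand-in for the local MHE solve + Eq. (28): refresh the agent's own component
        estimates[i, i] = truth[i] + 0.1 * rng.normal()
    estimates = W @ estimates                             # averaging step, Eq. (29)

print(np.abs(estimates - truth).max())                    # residual error shrinks as L grows
```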

4. Results

In this section, four distinct test cases are illustrated to evaluate the performance of the proposed algorithm. They involve a swarm of $N = 9$ UAVs operating over a designated area of interest.
The first test case was designed to carry out a sensitivity analysis, aiming to investigate how variations in key parameters, such as the number of consensus iterations ($L$), the number of receding horizon steps ($n_t$), and different communication link schemes ($\mathcal{E}$), affect the algorithm behavior.
The objective of the second test case was to assess the performance of the algorithm in the presence of communication faults, by analyzing the effects of two distinct communication interruptions among the collaborative agents.
Test cases #3 and #4 were designed to evaluate the performance of the algorithm in estimating the position and the speed of a non-collaborative target.
The numerical simulations were carried out in a MATLAB environment, considering a simulation period of $t = 250$ s. In every test, at the beginning of the simulation, each aircraft initializes its global state estimate with random values. This assumption was useful for assessing the convergence of the algorithm toward an accurate estimate and its ability to achieve consensus.
Each simulation considers white Gaussian noise affecting GPS and the transponder, as summarized in Table 1.
To assess the performance of the proposed algorithm, two indicators were defined: the standard deviation (SD) and the average error [73]. Assuming that $\hat{S}_j^i(k)$ indicates the position of the $j$-th agent estimated by aircraft $i$, the estimation error at time instant $k$ is given by

$$e_i^{S_j}(k) = \hat{S}_j^i(k) - S_j(k) \qquad (31)$$

The average error $\mu_i^{S_j}$ over a time window composed of $n_k$ time steps is

$$\mu_i^{S_j} = \frac{1}{n_k} \sum_{k=1}^{n_k} e_i^{S_j}(k) \qquad (32)$$

Similarly, it is possible to define the standard deviation $\sigma_i^{S_j}$ with respect to the average error as follows:

$$\sigma_i^{S_j} = \sqrt{ \frac{1}{n_k - 1} \sum_{k=1}^{n_k} \left( e_i^{S_j}(k) - \mu_i^{S_j} \right)^2 } \qquad (33)$$
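For reference, the two indicators (32)-(33) can be computed as in the short helper below. This sketch uses the Euclidean norm of the position error at each step, whereas the tables in this section report per-coordinate values; the function name is an assumption.

```python
import numpy as np

def error_stats(S_est, S_true):
    """Average error and sample standard deviation in the spirit of Eqs. (31)-(33).
    S_est, S_true: arrays of shape (n_k, 3) with estimated and actual positions."""
    e = np.linalg.norm(S_est - S_true, axis=1)   # per-step position error, cf. Eq. (31)
    mu = e.mean()                                # average error, Eq. (32)
    sigma = e.std(ddof=1)                        # standard deviation about the mean, Eq. (33)
    return mu, sigma
```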

4.1. Test Case #1

This subsection shows the results of a sensitivity analysis conducted on a scenario involving a formation of nine quadrotor UAVs: eight agents fly in the same direction at a constant speed of 2 m/s, while UAV #1 serves as a communication base for the rest of the swarm, hovering at a fixed point in the operational scenario. During the flight, the drones perform several maneuvers, in both the horizontal and the vertical plane, changing their altitude, in order to assess the quality of the estimates even during dynamic phases. Table 2 summarizes the initial position of each drone of the swarm.
This first test case is useful for evaluating how the operational parameters, such as the communication link schemes, the number of consensus steps, and the number of horizon steps, influence the performance of the proposed algorithm.
The sensitivity analysis considers five configurations for the number of consensus steps ($L \in \{2, 3, 5, 10, 20\}$), three configurations for the number of receding horizon steps ($n_t \in \{2, 3, 4\}$), and two communication schemes ($\mathcal{E}_2$ and $\mathcal{E}_4$), characterized by an increasing number of communication links, as illustrated in Figure 3.
The results of the numerical simulations, in terms of the estimation error $e_1^{S_2}$ made by UAV #1 in estimating the trajectory of UAV #2, are presented in Figure 4. Each plot corresponds to a different moving time horizon ($n_t = 2$, $n_t = 3$, and $n_t = 4$) and shows error bars that represent the average error and the standard deviation for each communication scheme $\mathcal{E}$ and number of consensus steps $L$.
It is worth noting that increasing the size of the moving time horizon does not significantly impact performance, which is already satisfactory for $n_t = 2$, given the same tuning parameters. At the same time, as expected, the communication scheme $\mathcal{E}_2$ requires a higher number of consensus steps; however, beyond $L = 5$, further improvements in the estimates become negligible.
To choose the best trade-off configuration, an evaluation of the computational load on the CPU is needed. Figure 5 shows the computation time per estimation step for all the previously discussed configurations, varying the estimation window size ($n_t$) and the number of consensus steps ($L$). From this analysis, the configuration with $n_t = 2$ and $L = 5$ emerges as the optimal trade-off. This choice ensures a low estimation error while maintaining a reasonable computational cost, with the CPU load peaking at 85% under a sampling time of 1 s. The results presented in the following sections were obtained using this optimal configuration.
As a representative example, Figure 6 illustrates the estimated trajectories of UAV #1, UAV #2, and UAV #9 as observed from UAV #1, alongside their actual trajectories. It is worth noting that, under the $\mathcal{E}_2$ communication scheme, only UAV #2 and UAV #3 communicate directly with UAV #1, as depicted in Figure 3. Nevertheless, UAV #1 successfully estimates the trajectories of both the visible UAV (UAV #2) and the non-visible UAV (UAV #9), achieving a limited estimation error, even in the presence of maneuvers. The scenario includes left and right turning maneuvers in the horizontal plane at $t = 40$ s and $t = 80$ s, followed by pull-up and pull-down maneuvers in the vertical plane at $t = 120$ s and $t = 140$ s, each lasting 20 s.

4.2. Test Case #2

The following test case considers the same initial formation configuration used in test case #1 (see Table 2). Here, all UAVs fly along the same direction at a constant speed of 2 m/s, performing a series of maneuvers in the horizontal plane before reaching the destination point, as highlighted in Figure 7.
The primary objective of this test case was to evaluate the ability of the proposed algorithm to recover the estimation of the UAV trajectories following interruptions in the inter-agent communication. In particular, to emphasize this aspect, this scenario introduces communication link failures between UAVs during flight, due to obstacle avoidance, at two specific time intervals: the first at $t_1 = 50$ s and the second at $t_2 = 100$ s, both with a duration of $\Delta t = 10$ s. Figure 8 depicts the communication link scheme during the communication interruption. In particular, when data transmission between UAV #4 and UAV #6, and between UAV #5 and UAV #7, is interrupted, the two subformations are reorganized into two distinct cycle graphs.
Figure 9 shows the estimated coordinates of UAV #8 as computed by UAV #1, compared with the actual trajectory. The estimation error increases during the two designated communication interruptions and effectively decreases once the communication link is restored.

4.3. Test Case #3

Test case #3 was designed to assess the ability of the proposed algorithm to estimate the position of a fixed target placed at $T = (500, 50, 0)$ m in the operational scenario. Here, a formation of nine UAVs is considered: eight UAVs fly along the same direction at a constant speed of 5 m/s, at different altitudes above the terrain (see Table 2), while UAV #1 hovers at a fixed point, serving as a communication base. Every agent can communicate with its neighbors, forming a cycle graph, as depicted in Figure 10, and is equipped with an additional transponder to detect the presence of the fixed target.
Table 3 includes information about the bias and noise covariance of the additional transponder [63].
Figure 11 shows the estimation error of the coordinates of the target, comparing the actual value and the estimates made by UAV #1. Starting from random initial values, the estimation converges to the actual value in less than 10 s. The presence of a limited estimation error is further confirmed by the analysis of Table 4, which shows the average error and the standard deviation of the estimated position.

4.4. Test Case #4

The main aim of test case #4 was to assess the ability of the algorithm to estimate the position of a moving target, initially placed at $T = (500, 50, 0)$ m and moving with a velocity vector $V_T = (0.5, 0.5, 0)$ m/s. The formation is composed of nine UAVs, and the initial positions of the agents of the swarm are summarized in Table 2.
Figure 12 depicts the estimation error of the target coordinates as computed by UAV #1. As shown, the estimation converges to the actual value in less than 10 s. However, as shown in Figure 12b, the estimation error grows during the first 100 s and then drops below one meter once the formation moves directly above the target. This behavior is also evident in Table 5, which reports the average error and the standard deviation of the error made by UAV #1 in the estimation of the target trajectory.

5. Conclusions

This study presented the development of a DMHE scheme for a formation of UAVs, combining consensus theory and receding horizon estimation techniques. The primary objective was to distribute the computational workload across a network of UAVs while minimizing network connectivity requirements. The DMHE was designed to overcome the limitations of traditional Kalman filtering. Additionally, the algorithm was tailored to facilitate the identification of a target position, providing robust situational awareness within a distributed UAV swarm.
The results proved the algorithm’s ability to achieve consensus on the positions of both the UAVs and the target, even under challenging conditions such as communication interruptions. Several tests were carried out to evaluate the performance of the algorithm. A sensitivity analysis of the main configuration parameters showed the ability of the DMHE to successfully estimate the trajectories of UAVs, despite limited communication links, using several configurations in terms of consensus steps and receding horizon size. The results highlight the configuration n t = 2 and L = 5 as an optimal trade-off, balancing estimation accuracy and computational cost.
The DMHE represents an interesting solution for distributed estimation in UAV swarms, offering reliable performance and robustness to communication faults. Its integration into real-world applications could enhance situational awareness and coordination in autonomous systems.

Author Contributions

Conceptualization, S.R.B., E.D. and I.N.; data curation, S.R.B. and I.N.; formal analysis, S.R.B., E.D. and I.N.; investigation, S.R.B., E.D. and I.N.; methodology, S.R.B., E.D. and I.N.; software, S.R.B., E.D. and I.N.; supervision, E.D.; validation, S.R.B., E.D. and I.N.; writing—original draft, S.R.B., E.D. and I.N.; writing—review and editing, S.R.B., E.D. and I.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the research project—ID:P2022XER7W “CHEMSYS: Cooperative Heterogeneous Multi-drone SYStem for disaster prevention and first response” granted by the Italian Ministry of University and Research (MUR) within the PRIN 2022 PNRR program, funded by the European Union through the PNRR program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHRS      Attitude and Heading Reference System
AOA       Angle of Arrival
CPU       Central Processing Unit
DKF       Decentralized Kalman Filter
DMHE      Distributed Moving Horizon Estimation
EKF       Extended Kalman Filter
GPS       Global Positioning System
HCMCI-KF  Hybrid Consensus on Measurements and Consensus on Information Kalman Filter
INS       Inertial Navigation System
IoT       Internet of Things
KF        Kalman Filter
KCF       Kalman Consensus Filter
MHE       Moving Horizon Estimation
MLE       Maximum Likelihood Estimation
MPC       Model Predictive Control
MTS       Multi Time Scale
NCV       Nearly Constant Velocity
NED       North East Down
RSS       Received Signal Strength
SD        Standard Deviation
STS       Single Time Scale
ToA       Time of Arrival
ToF       Time of Flight
UAV       Unmanned Aerial Vehicle

References

  1. Mohsan, S.A.H.; Khan, M.A.; Noor, F.; Ullah, I.; Alsharif, M.H. Towards the unmanned aerial vehicles (UAVs): A comprehensive review. Drones 2022, 6, 147. [Google Scholar] [CrossRef]
  2. Phadke, A.; Medrano, F.A. Increasing Operational Resiliency of UAV Swarms: An Agent-Focused Search and Rescue Framework. Aerosp. Res. Commun. 2024, 1, 12420. [Google Scholar] [CrossRef]
  3. Lee, W.; Kim, D. Autonomous shepherding behaviors of multiple target steering robots. Sensors 2017, 17, 2729. [Google Scholar] [CrossRef]
  4. Sun, F.; Turkoglu, K. Distributed real-time non-linear receding horizon control methodology for multi-agent consensus problems. Aerosp. Sci. Technol. 2017, 63, 82–90. [Google Scholar] [CrossRef]
  5. Guo, X.; Lu, J.; Alsaedi, A.; Alsaadi, F.E. Bipartite consensus for multi-agent systems with antagonistic interactions and communication delays. Phys. A Stat. Mech. Its Appl. 2018, 495, 488–497. [Google Scholar] [CrossRef]
  6. Bassolillo, S.R.; D’Amato, E.; Notaro, I.; Blasi, L.; Mattei, M. Decentralized mesh-based model predictive control for swarms of UAVs. Sensors 2020, 20, 4324. [Google Scholar] [CrossRef] [PubMed]
  7. Simonetto, A.; Keviczky, T.; Babuška, R. Distributed nonlinear estimation for robot localization using weighted consensus. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, Alaska, 3–8 May 2010; pp. 3026–3031. [Google Scholar]
  8. Mitchell, H.B. Multi-Sensor Data Fusion: An Introduction; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  9. Su, K.; Qian, F. Multi-UAV Cooperative Searching and Tracking for Moving Targets Based on Multi-Agent Reinforcement Learning. Appl. Sci. 2023, 13, 11905. [Google Scholar] [CrossRef]
  10. Xia, Z.; Du, J.; Wang, J.; Jiang, C.; Ren, Y.; Li, G.; Han, Z. Multi-agent reinforcement learning aided intelligent UAV swarm for target tracking. IEEE Trans. Veh. Technol. 2021, 71, 931–945. [Google Scholar] [CrossRef]
  11. Olfati-Saber, R.; Jalalkamali, P. Collaborative target tracking using distributed Kalman filtering on mobile sensor networks. In Proceedings of the 2011 American Control Conference, San Francisco, CA, USA, 29 June–1 July 2011; pp. 1100–1105. [Google Scholar]
  12. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1333–1364. [Google Scholar]
  13. Deghat, M.; Shames, I.; Anderson, B.D.; Yu, C. Localization and circumnavigation of a slowly moving target using bearing measurements. IEEE Trans. Autom. Control 2014, 59, 2182–2188. [Google Scholar] [CrossRef]
  14. Olfati-Saber, R.; Sandell, N.F. Distributed tracking in sensor networks with limited sensing range. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA, 11–13 June 2008; pp. 3157–3162. [Google Scholar]
  15. Orton, M.; Fitzgerald, W. A Bayesian approach to tracking multiple targets using sensor arrays and particle filters. IEEE Trans. Signal Process. 2002, 50, 216–223. [Google Scholar] [CrossRef]
  16. Singh, J.; Madhow, U.; Kumar, R.; Suri, S.; Cagley, R. Tracking multiple targets using binary proximity sensors. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks, Cambridge, MA, USA, 25–27 April 2007; pp. 529–538. [Google Scholar]
  17. Khan, M.; Heurtefeux, K.; Mohamed, A.; Harras, K.A.; Hassan, M.M. Mobile target coverage and tracking on drone-be-gone UAV cyber-physical testbed. IEEE Syst. J. 2017, 12, 3485–3496. [Google Scholar] [CrossRef]
  18. Ham, A. Drone-based material transfer system in a robotic mobile fulfillment center. IEEE Trans. Autom. Sci. Eng. 2019, 17, 957–965. [Google Scholar] [CrossRef]
  19. Benevento, A.; Santos, M.; Notarstefano, G.; Paynabar, K.; Bloch, M.; Egerstedt, M. Multi-robot coordination for estimation and coverage of unknown spatial fields. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 7740–7746. [Google Scholar]
  20. Chen, Q.; Sun, Y.; Zhao, M.; Liu, M. Consensus-based cooperative formation guidance strategy for multiparafoil airdrop systems. IEEE Trans. Autom. Sci. Eng. 2020, 18, 2175–2184. [Google Scholar] [CrossRef]
  21. Shafiei, M.; Vazirpour, N. The approach of partial stabilisation in design of discrete-time robust guidance laws against manoeuvering targets. Aeronaut. J. 2020, 124, 1114–1127. [Google Scholar] [CrossRef]
  22. Taghieh, A.; Shafiei, M.H. Observer-based robust model predictive control of switched nonlinear systems with time delay and parametric uncertainties. J. Vib. Control 2021, 27, 1939–1955. [Google Scholar] [CrossRef]
  23. Koivisto, M.; Hakkarainen, A.; Costa, M.; Talvitie, J.; Heiska, K.; Leppänen, K.; Valkama, M. Continuous high-accuracy radio positioning of cars in ultra-dense 5G networks. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; pp. 115–120. [Google Scholar]
  24. Haber, A.; Molnar, F.; Motter, A.E. State observation and sensor selection for nonlinear networks. IEEE Trans. Control Netw. Syst. 2017, 5, 694–708. [Google Scholar] [CrossRef]
  25. Zhao, Z.; Li, T.R.; Jilkov, V.P. Best linear unbiased filtering with nonlinear measurements for target tracking. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1324–1336. [Google Scholar] [CrossRef]
  26. He, D.; Qiao, Y.; Chan, S.; Guizani, N. Flight security and safety of drones in airborne fog computing systems. IEEE Commun. Mag. 2018, 56, 66–71. [Google Scholar] [CrossRef]
  27. Kehoe, B.; Patil, S.; Abbeel, P.; Goldberg, K. A survey of research on cloud robotics and automation. IEEE Trans. Autom. Sci. Eng. 2015, 12, 398–409. [Google Scholar] [CrossRef]
  28. Chen, W.; Liu, J.; Guo, H. Achieving robust and efficient consensus for large-scale drone swarm. IEEE Trans. Veh. Technol. 2020, 69, 15867–15879. [Google Scholar] [CrossRef]
  29. Abdelmawgoud, A.; Pack, D.; Ruble, Z. Consensus-based distributed estimation in multi-agent systems with time delay. In Proceedings of the 2018 World Automation Congress (WAC), Stevenson, WA, USA, 3–6 June 2018; pp. 1–5. [Google Scholar]
  30. DeGroot, M.H. Reaching a consensus. J. Am. Stat. Assoc. 1974, 69, 118–121. [Google Scholar] [CrossRef]
  31. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar] [CrossRef]
  32. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef]
  33. Doostmohammadian, M. Single-bit consensus with finite-time convergence: Theory and applications. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 3332–3338. [Google Scholar] [CrossRef]
  34. Boukhobza, T.; Hamelin, F.; Martinez-Martinez, S.; Sauter, D. Structural analysis of the partial state and input observability for structured linear systems: Application to distributed systems. Eur. J. Control 2009, 15, 503–516. [Google Scholar] [CrossRef]
  35. Kar, S.; Moura, J.M.; Ramanan, K. Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication. IEEE Trans. Inf. Theory 2012, 58, 3575–3605. [Google Scholar] [CrossRef]
  36. Das, S.; Moura, J.M. Distributed Kalman filtering with dynamic observations consensus. IEEE Trans. Signal Process. 2015, 63, 4458–4473. [Google Scholar] [CrossRef]
  37. Zhong, X.; Mohammadi, A.; Premkumar, A.B.; Asif, A. A distributed particle filtering approach for multiple acoustic source tracking using an acoustic vector sensor network. Signal Process. 2015, 108, 589–603. [Google Scholar] [CrossRef]
  38. Ennasr, O.; Xing, G.; Tan, X. Distributed time-difference-of-arrival (TDOA)-based localization of a moving target. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA 12–14 December 2016; pp. 2652–2658. [Google Scholar]
  39. He, X.; Hu, C.; Hong, Y.; Shi, L.; Fang, H.T. Distributed Kalman filters with state equality constraints: Time-based and event-triggered communications. IEEE Trans. Autom. Control 2019, 65, 28–43. [Google Scholar] [CrossRef]
  40. He, X.; Ren, X.; Sandberg, H.; Johansson, K.H. How to secure distributed filters under sensor attacks. IEEE Trans. Autom. Control 2021, 67, 2843–2856. [Google Scholar] [CrossRef]
  41. Chen, W.; Liu, J.; Guo, H.; Kato, N. Toward robust and intelligent drone swarm: Challenges and future directions. IEEE Netw. 2020, 34, 278–283. [Google Scholar] [CrossRef]
  42. Bassolillo, S.R.; Blasi, L.; D’Amato, E.; Mattei, M.; Notaro, I. Decentralized Triangular Guidance Algorithms for Formations of UAVs. Drones 2022, 6, 7. [Google Scholar] [CrossRef]
  43. Olfati-Saber, R. Distributed Kalman filter with embedded consensus filters. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 12–15 December 2005; pp. 8179–8184. [Google Scholar]
  44. Olfati-Saber, R. Kalman-consensus filter: Optimality, stability, and performance. In Proceedings of the 48h IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 7036–7042. [Google Scholar]
  45. Zhou, Z.; Fang, H.; Hong, Y. Distributed estimation for moving target based on state-consensus strategy. IEEE Trans. Autom. Control 2013, 58, 2096–2101. [Google Scholar] [CrossRef]
  46. Wang, S.; Ren, W.; Li, Z. Information-driven fully distributed Kalman filter for sensor networks in presence of naive nodes. arXiv 2014, arXiv:1410.0411. [Google Scholar]
  47. Allgöwer, F.; Badgwell, T.A.; Qin, J.S.; Rawlings, J.B.; Wright, S.J. Nonlinear predictive control and moving horizon estimation—An introductory overview. In Advances in Control; Springer: London, UK, 1999; pp. 391–449. [Google Scholar]
  48. Franze, G.; Mattei, M.; Ollio, L.; Scordamaglia, V. A robust constrained model predictive control scheme for norm-bounded uncertain systems with partial state measurements. Int. J. Robust Nonlinear Control 2019, 29, 6105–6125. [Google Scholar] [CrossRef]
  49. Allan, D.A.; Rawlings, J.B. Moving horizon estimation. In Handbook of Model Predictive Control; Springer International Publishing: Cham, Germany, 2019; pp. 99–124. [Google Scholar]
  50. Rao, C.V.; Rawlings, J.B.; Lee, J.H. Constrained linear state estimation—A moving horizon approach. Automatica 2001, 37, 1619–1628. [Google Scholar] [CrossRef]
  51. Alessandri, A.; Baglietto, M.; Battistelli, G. Receding-horizon estimation for discrete-time linear systems. IEEE Trans. Autom. Control 2003, 48, 473–478. [Google Scholar] [CrossRef]
  52. Alessandri, A.; Baglietto, M.; Battistelli, G. Moving-horizon state estimation for nonlinear discrete-time systems: New stability results and approximation schemes. Automatica 2008, 44, 1753–1765. [Google Scholar] [CrossRef]
  53. Ferrari-Trecate, G.; Mignone, D.; Morari, M. Moving horizon estimation for hybrid systems. IEEE Trans. Autom. Control 2002, 47, 1663–1676. [Google Scholar] [CrossRef]
  54. Farina, M.; Ferrari-Trecate, G.; Scattolini, R. Distributed moving horizon estimation for linear constrained systems. IEEE Trans. Autom. Control 2010, 55, 2462–2475. [Google Scholar] [CrossRef]
  55. Farina, M.; Ferrari-Trecate, G.; Scattolini, R. Distributed moving horizon estimation for nonlinear constrained systems. Int. J. Robust Nonlinear Control 2012, 22, 123–143. [Google Scholar] [CrossRef]
  56. Battistelli, G. Distributed moving-horizon estimation with arrival-cost consensus. IEEE Trans. Autom. Control 2018, 64, 3316–3323. [Google Scholar] [CrossRef]
  57. Huang, Z.; Lv, W.; Liu, C.; Xu, Y.; Rutkowski, L.; Huang, T. Event-triggered distributed moving horizon estimation over wireless sensor networks. IEEE Trans. Ind. Inform. 2024, 20, 4218–4226. [Google Scholar] [CrossRef]
  58. Yu, D.; Xia, Y.; Zhai, D.H. Distributed moving-horizon estimation with event-triggered communication over sensor networks. IEEE Trans. Autom. Control 2023, 68, 7982–7988. [Google Scholar] [CrossRef]
  59. Kim, J.; Kang, J.H.; Bae, J.; Lee, W.; Kim, K.K.K. Distributed moving horizon estimation via operator splitting for automated robust power system state estimation. IEEE Access 2021, 9, 90428–90440. [Google Scholar] [CrossRef]
  60. Zou, L.; Wang, Z.; Shen, B.; Dong, H. Moving horizon estimation over relay channels: Dealing with packet losses. Automatica 2023, 155, 111079. [Google Scholar] [CrossRef]
  61. Rajab, K.Z.; Wu, B.; Alizadeh, P.; Alomainy, A. Multi-target tracking and activity classification with millimeter-wave radar. Appl. Phys. Lett. 2021, 119, 034101. [Google Scholar] [CrossRef]
  62. Yeste-Ojeda, O.A.; Zambrano, J.; Landry, R. Design of integrated Mode S transponder, ADS-B and distance measuring equipment transceivers. In Proceedings of the 2016 Integrated Communications Navigation and Surveillance (ICNS), Herndon, VA, USA, 19–21 April 2016; pp. 4E1–4E8. [Google Scholar]
  63. Bassolillo, S.R.; D’Amato, E.; Mattei, M.; Notaro, I. Distributed navigation in emergency scenarios: A case study on post-avalanche search and rescue using drones. Appl. Sci. 2023, 13, 11186. [Google Scholar] [CrossRef]
  64. Dogru, S.; Marques, L. Pursuing drones with drones using millimeter wave radar. IEEE Robot. Autom. Lett. 2020, 5, 4156–4163. [Google Scholar] [CrossRef]
  65. Hansard, M.; Lee, S.; Choi, O.; Horaud, R.P. Time-of-Flight Cameras: Principles, Methods and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  66. Kühl, P.; Diehl, M.; Kraus, T.; Schlöder, J.P.; Bock, H.G. A real-time algorithm for moving horizon state and parameter estimation. Comput. Chem. Eng. 2011, 35, 71–83. [Google Scholar] [CrossRef]
  67. Saber, R.O.; Murray, R.M. Consensus protocols for networks of dynamic agents. In Proceedings of the 2003 American Control Conference, Denver, CO, USA, 4–6 June 2003; Volume 2, pp. 951–956. [Google Scholar]
  68. Xiao, L.; Boyd, S.; Lall, S. A scheme for robust distributed sensor fusion based on average consensus. In Proceedings of the IPSN 2005 Fourth International Symposium on Information Processing in Sensor Networks, Los Angeles, CA, USA, 25–27 April 2005; pp. 63–70. [Google Scholar]
  69. Muske, K.R.; Rawlings, J.B.; Lee, J.H. Receding horizon recursive state estimation. In Proceedings of the 1993 American Control Conference, San Antonio, TX, USA, 9–12 May 1993; pp. 900–904. [Google Scholar]
  70. Alessandri, A.; Baglietto, M.; Battistelli, G.; Zavala, V. Advances in moving horizon estimation for nonlinear systems. In Proceedings of the 49th IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 5681–5688. [Google Scholar]
  71. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
  72. Battistelli, G.; Chisci, L. Stability of consensus extended Kalman filter for distributed state estimation. Automatica 2016, 68, 169–178. [Google Scholar] [CrossRef]
  73. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82. [Google Scholar] [CrossRef]
Figure 1. Graphical representation of the communication flow between UAVs during consensus.
Figure 2. Graphical representation of the DMHE flow for the i-th UAV.
Figure 3. Test case #1: Communication schemes among the drones. In $\mathcal{E}_2$, each aircraft communicates with two neighbors; in $\mathcal{E}_4$, each aircraft communicates with four neighbors.
Figure 4. Test case #1: Graphical comparison of the estimation error made by UAV #1 in estimating the position of UAV #2 as a function of the consensus steps ($L$) and the communication link schemes ($\mathcal{E}$) with $n_t = 2, 3, 4$. The middle point of each bar represents the average error, $e_1^{S_2}$, while the endpoints correspond to the minimum and maximum values of the standard deviation $\sigma_1^{S_2}$.
Figure 5. Test case #1: Graphical comparison of the DMHE computation time per estimation step, varying the estimation window size ($n_t$) and the number of consensus steps ($L$). The numerical simulations were performed on a laptop equipped with an Apple M3 processor and 16 GB of RAM.
Figure 6. Test case #1: Trajectories of UAV #1, UAV #2, and UAV #9 as estimated by UAV #1, considering $\mathcal{E} = \mathcal{E}_2$, $L = 5$ consensus steps, and a moving horizon with $n_t = 2$.
Figure 7. Test case #2: UAV reference trajectories.
Figure 8. Test case #2: Communication link scheme during the interruption in the data transmission between UAV #4 and UAV #6 and between UAV #5 and UAV #7.
Figure 9. Test case #2: Comparison between the estimated X, Y, Z coordinates of UAV #8 and its actual coordinates.
Figure 10. Test case #3: Starting position of the UAVs and target. The solid black lines represent the communication links between aircraft.
Figure 11. Test case #3: Estimation error of the target coordinates, $e_1^{X_T}$, $e_1^{Y_T}$, and $e_1^{Z_T}$, as computed by UAV #1.
Figure 12. Test case #4: Estimation error of the target coordinates, $e_1^{X_T}$, $e_1^{Y_T}$, and $e_1^{Z_T}$, as computed by UAV #1.
Table 1. Sensor biases and measurement noise covariance matrices considered in the simulations.

Description                           Value
GPS bias (m)                          $[2, 2, 4]^T$
Transponder bias (m)                  $4 \times 10^{-2}$
GPS noise covariance (m²)             $\mathrm{diag}([4 \times 10^{-1}, 4 \times 10^{-1}, 1]^T)$
Transponder noise covariance (m²)     $I_9 \times 10^{-4}$
Table 2. UAV starting positions.

Agent      X(0) (m)   Y(0) (m)   Z(0) (m)
UAV #1     0          0          20
UAV #2     −20        −20        22
UAV #3     −20        20         22
UAV #4     −40        −40        24
UAV #5     −40        40         24
UAV #6     −60        −60        18
UAV #7     −60        60         18
UAV #8     −80        −80        16
UAV #9     −80        80         16
Table 3. Test case #3: Bias and noise covariance of the additional transponder.

Description                                  Value
Target transponder bias (m)                  $4 \times 10^{-4}$
Target transponder noise covariance (m²)     $I_9 \times 10^{-4}$
Table 4. Test case #3: Estimation error of the target position. The mean value, $\mu$, and the standard deviation, $\sigma$, are evaluated in the time window 30 s $\leq t \leq$ 180 s.

$\sigma_1^{X_T}$ [m]   $\sigma_1^{Y_T}$ [m]   $\sigma_1^{Z_T}$ [m]   $\mu_1^{X_T}$ [m]   $\mu_1^{Y_T}$ [m]   $\mu_1^{Z_T}$ [m]
0.2629                 0.5985                 0.0389                 0.0565              0.0897              0.0045
Table 5. Test case #4: Estimation error of the target position. The mean value, $\mu$, and the standard deviation, $\sigma$, are evaluated in the time window 150 s $\leq t \leq$ 200 s.

$\sigma_1^{X_T}$ [m]   $\sigma_1^{Y_T}$ [m]   $\sigma_1^{Z_T}$ [m]   $\mu_1^{X_T}$ [m]   $\mu_1^{Y_T}$ [m]   $\mu_1^{Z_T}$ [m]
0.2171                 0.5238                 0.0355                 0.1005              1.3336              0.0367
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
