Article

A Single-Anchor Cooperative Positioning Method Based on Optimized Inertial Measurement for UAVs

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Drones 2023, 7(9), 577; https://doi.org/10.3390/drones7090577
Submission received: 21 August 2023 / Revised: 10 September 2023 / Accepted: 11 September 2023 / Published: 13 September 2023

Abstract:
Benefiting from its structural simplicity and low cost, the inertial/ranging integrated navigation system is widely utilized in multi-agent applications, particularly in unmanned aerial vehicles (UAVs). As the deployment of UAVs in complex environments becomes more prevalent, accurate positioning in sparse observation scenarios has become increasingly important. In satellite-denied environments with few anchors, traditional filtering methods for positioning suffer from poor effectiveness due to the lack of constraints. This article proposes a method to enhance positioning accuracy in such environments by optimizing the inertial outputs of each UAV. The optimization process is based on the range measurements between the UAVs and a single anchor. By solving the optimization function derived using Bayesian theory, the optimized inertial outputs of the UAVs can be obtained. These optimized inertial data are then used in place of the original measurements for position estimation in the filter, resulting in improved performance. Simulation and real-world experiments validate that the proposed method can enhance UAVs’ positioning accuracy in single-anchor environments, surpassing the performance of a single optimizer or filter. Furthermore, the positions estimated by cooperative agents demonstrate higher accuracy than those estimated by individual agents, as more ranging measurements are incorporated.

1. Introduction

In recent years, multi-agent systems have found increasing applications in various fields such as the military, agriculture, and commerce, especially multi-UAV systems. One critical requirement in these systems is accurate positioning. The ranging-based positioning method has emerged as a common solution, thanks to the availability of cost-effective and high-efficiency range sensors. Among the range sensors commonly used in such systems, ultra-wideband (UWB) modules [1] have gained significant popularity. These modules have been extensively utilized in UAVs due to their high ranging accuracy, anti-multipath capability, and low power consumption [2].
The agents equipped with UWB modules can receive range measurements from fixed anchors with predetermined locations. These range measurements from the anchors play a crucial role in providing geometric constraints and approximate position estimates for the agents. This methodology is akin to the trilateration (three-sphere intersection) technique used in satellite positioning. It proves particularly useful in environments where a Global Navigation Satellite System (GNSS) is unavailable or unreliable, such as indoor spaces [3], underground coal mines [4], underwater environments [5,6], and other similar settings. In many research studies on ranging-based positioning systems, the principles of graph optimization [7,8,9] or Kalman filtering [10,11] are commonly employed to track the position of the agent. In some cases, a sliding window approach [7] is also incorporated to enhance trajectory smoothness. These techniques enable accurate and reliable position tracking in multi-agent systems utilizing range measurements. However, relying on a single ranging source is typically inadequate for precise positioning due to the inherent limitations of ranging, such as susceptibility to sudden changes or interruptions, as well as the low frequency of measurements.
Fusing an inertial sensor with a range sensor is a prevalent approach to enhancing positioning accuracy. The high-frequency inertial measurements comprise angular velocities and specific forces, and the inertial calculation process transforms these measurements in the body frame into a position in the navigation frame. Due to the accumulation of inertial errors, the positioning accuracy degrades continuously over time; the range measurements can provide constraints and suppress the influence of these errors. In contrast to utilizing range measurements alone, the integration of inertial sensors enables more precise estimation of the motion dynamics model of the system [12]. By incorporating inertial calculations, the integration process establishes a correlation between the motion states at preceding and succeeding moments in time, which allows for a more comprehensive prediction of the navigation state [13].
The positioning methods that integrate UWB and an Inertial Measurement Unit (IMU) share significant similarities with pure UWB positioning techniques, primarily involving optimization and filtering methodologies. The optimization methods rely mainly on the graph optimization model [14]. The sliding window technique [15] is often employed to estimate relative positions at selected key points by combining distance measurements with acceleration estimates. The filtering methods are mainly based on the Kalman filter [16], which linearizes the nonlinear ranging to predict and update the navigation state of the agent. The current research on nonlinear filtering integrating UWB and IMU mainly focuses on the Extended Kalman Filter (EKF) [12,17], the Unscented Kalman Filter (UKF) [18,19], and the Invariant Extended Kalman Filter (IEKF) [20]. This combined approach leverages the advantages of both range and inertial measurements to enhance the accuracy and robustness of the positioning system.
In a multi-agent system, agents equipped with IMUs can receive range measurements from not only the anchors but also other agents. This is a common pattern of cooperative positioning. The additional range measurements can enhance the positioning accuracy of cooperative agents [21,22,23], compared to the single-agent positioning method.
Whether for single-agent positioning or multi-agent cooperative positioning, the majority of inertial-ranging positioning methods are built upon multi-anchor networks. These methods rely on the presence of open space and stable communication channels to facilitate ranging [24]. In practical application environments, there may not be enough anchors, communication may be restricted, and it is not always possible to receive enough signals from multiple anchors at the same time. Therefore, it is important to explore improved positioning methods for sparse-anchor environments, of which single-anchor positioning [25] is the most fundamental case.
Traditional single-anchor positioning methods build virtual anchors for positioning [26,27]. Some methods also involve additional measurements such as velocities [28] and angles [29]. Recently, a geometry-based single-anchor positioning method introduced in [30] estimated the position using only inertial and range measurements. Another study [31] improved the accuracy of velocity estimation using the same sensors. Our group has also presented an optimization-based method for single-anchor positioning [32]. Different from these methods, we aim at the cooperation of the agents. The proposed method involves the range measurements between agents in the process of building constraints, which centrally improves the positioning accuracy of the agents compared to the positioning of individual agents.
The main contributions of this paper include the following:
1. We propose a cooperative positioning method for single-anchor environments based on range and inertial measurements. At the moments of range observation, an optimizer provides the optimized inertial data to a filter, and the filter outputs the position using the prior navigation state of the agent.
2. An optimization method for inertial measurements in a multi-agent system is presented. The optimal estimation model is derived from Bayesian theory, and the optimal inertial measurements are obtained at the extreme points, whose existence is further discussed.
3. The performance of the single-anchor cooperative positioning method is presented in a simulated multi-UAV environment and a real-world indoor environment. The impact of cooperation on improving the positioning accuracy is also discussed.
The rest of this paper is organized as follows. The preliminaries and sensor models are presented in Section 2. Section 3 derives the inertial optimization algorithm, and Section 4 presents the cooperative positioning method built upon it. Section 5 and Section 6 evaluate the simulation and real-world experimental results, respectively.

2. Preliminaries and Sensor Models

The sensors involved in the multi-agent system consist of inertial sensors and range sensors. Before the optimization, the error distributions of the sensor outputs need to be determined.

2.1. Inertial Sensor Model

The composition of inertial errors is complex, and there are various approximate models for sensors of different accuracies. According to the IEEE standard for inertial devices [33], the main errors that may exist in inertial sensors include random walk, quantization noise, correlated (Markov) random drift, bias instability, and ramp instability. This classification is detailed with respect to the characteristics of the noise, but there is a significant difference in the magnitudes of these errors. The classification of inertial errors into static and dynamic components introduced in [34] is widely accepted for high-accuracy sensors, but it is not applicable to low-accuracy and medium-accuracy sensors.
Referring to the classification methods in [33,34], the classification of inertial errors in this paper is given in Figure 1.
As shown in Figure 1, the inertial errors can be divided into bias, installation error, and random noise.
  • The bias is a constant error exhibited by all accelerometers and gyroscopes. It comprises the fixed bias and the run-to-run bias: the former is the residual bias remaining after sensor calibration, and the latter is the run-to-run variation of each instrument's bias. The fixed bias is constant throughout an IMU operating period, while the run-to-run bias changes at each turn-on.
  • The installation error is caused by deviations between the actual axes and the ideal axes, and it comprises the scale factor error and the cross-coupling error. The scale factor error is the departure of the instrument's input–output gradient from unity following unit conversion by the IMU. The cross-coupling error arises from the misalignment of the sensitive axes of the inertial sensors with respect to the orthogonal axes of the body frame due to manufacturing limitations. Unlike the bias, the installation error does not vary from run to run, and it can therefore be estimated by calibration in advance.
  • The random noise exhibited in the inertial sensors has several sources, such as electromagnetic interference, temperature variation, and mechanical vibration. The bias instability and temperature drift both belong to random noise. The spectrum of accelerometer and gyroscope noise at frequencies below 1 Hz is approximately white, so the standard deviation of the averaged specific force and angular velocity noise varies in inverse proportion to the square root of the averaging time. Inertial sensor noise is thus usually quoted in terms of the root power spectral density (PSD). The customary unit [34] is $\mu g/\sqrt{\mathrm{Hz}}$ for accelerometer random noise, where
$$1\ \mu g/\sqrt{\mathrm{Hz}} = 9.80665 \times 10^{-6}\ \mathrm{m \cdot s^{-1.5}}, \tag{1}$$
and $^{\circ}/\sqrt{\mathrm{h}}$ for gyroscope random noise, where
$$1\ {}^{\circ}/\sqrt{\mathrm{h}} = 2.909 \times 10^{-4}\ \mathrm{rad \cdot s^{-0.5}}. \tag{2}$$
The standard deviations of the random noise samples are obtained by multiplying the corresponding root PSDs by the square root of the sampling rate or, equivalently, by dividing them by the square root of the sampling interval. The white random noise cannot be calibrated and compensated, as there is no correlation between past and future values.
A commonly used method for quantifying random noise is the Allan variance [35]. The Allan variance represents the root mean square (RMS) random-drift error as a function of averaging time. By performing a simple operation on the entire length of static data, a characteristic curve is obtained whose inspection facilitates the determination of the types and magnitudes of the error terms existing in the inertial sensors.
In our work, the inertial random noise is identified as Gaussian white noise, as the inertial sensors in the multi-agent system have low or medium accuracy. The random noise is independently and identically distributed in the three orthogonal directions of the body frame, as the accelerometers and gyroscopes are approximately independent. Let $\sigma_g$ denote the standard deviation of the angular rate and $\sigma_a$ the standard deviation of the specific force; both can be estimated by the Allan variance before the experiments, as sketched below.
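As a concrete aid, the following minimal Python sketch (all function and variable names are ours, not from the paper) computes the overlapping Allan deviation of a static sensor record; reading the white-noise (slope −1/2) segment of the resulting log–log curve gives the levels used for $\sigma_g$ and $\sigma_a$.

```python
import numpy as np

def allan_deviation(x, fs, taus):
    """Overlapping Allan deviation of a static record x sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    c = np.cumsum(np.insert(x, 0, 0.0))       # prefix sums for fast averaging
    out = []
    for tau in taus:
        m = int(round(tau * fs))              # samples per averaging window
        if m < 1 or 2 * m >= len(x):
            out.append(np.nan)
            continue
        avg = (c[m:] - c[:-m]) / m            # overlapping window means
        d = avg[m:] - avg[:-m]                # differences of adjacent windows
        out.append(np.sqrt(0.5 * np.mean(d ** 2)))
    return np.array(out)

# Example: white gyro noise with root PSD 0.01 (deg/s)/sqrt(Hz) at 100 Hz;
# per the text, the sample standard deviation is the root PSD times sqrt(fs).
fs = 100.0
record = 0.01 * np.sqrt(fs) * np.random.randn(200_000)
adev = allan_deviation(record, fs, np.logspace(-1, 2, 20))
```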
In conclusion, the output of the inertial sensors in our work can be modeled as follows. The bias is denoted by $b_g$ for the gyroscope and $b_a$ for the accelerometer, and the random noise is denoted by $w_g$ for the gyroscope and $w_a$ for the accelerometer. The installation error can be obtained in calibration and eliminated before the experiments. Therefore, the inertial random errors follow the Gaussian distribution with mean zero, as given by
$$w_g \sim \mathcal{N}\left(0, \Sigma_g\right), \quad w_a \sim \mathcal{N}\left(0, \Sigma_a\right), \tag{3}$$
where $\Sigma_g$ and $\Sigma_a$ are the diagonal matrices of the variances $\sigma_g^2$ and $\sigma_a^2$. The errors of the IMU data can be modeled using Gaussian distributions as follows:
$$\delta\omega_{ib}^b \sim \mathcal{N}\left(b_g, \Sigma_g\right), \quad \delta f^b \sim \mathcal{N}\left(b_a, \Sigma_a\right), \tag{4}$$
in which $\delta\omega_{ib}^b$ denotes the angular velocity error and $\delta f^b$ denotes the specific force error.
Consequently, the output of an inertial sensor can be expressed as follows:
$$\tilde{\omega}_{ib}^b = \omega_{ib}^b + \delta\omega_{ib}^b = \omega_{ib}^b + b_g + w_g, \quad \tilde{f}^b = f^b + \delta f^b = f^b + b_a + w_a, \tag{5}$$
where $\tilde{\omega}_{ib}^b$ and $\tilde{f}^b$ are the measurements, and $\omega_{ib}^b$ and $f^b$ are the unbiased true values of the angular velocity and specific force.
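For illustration, the measurement model in (5) can be sampled directly. The following is a minimal sketch under the stated assumptions (constant bias, white Gaussian noise); all names are ours rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def imu_output(w_true, f_true, b_g, b_a, sigma_g, sigma_a):
    """Sample one IMU output per model (5): truth + constant bias + white noise."""
    w_meas = w_true + b_g + rng.normal(0.0, sigma_g, size=3)  # gyro triad
    f_meas = f_true + b_a + rng.normal(0.0, sigma_a, size=3)  # accel triad
    return w_meas, f_meas
```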
In a multi-agent system composed of $N$ agents with inertial sensors, the inertial outputs can be represented as sets. Let $W_N$ denote the set of unbiased angular velocities $\omega_{ib}^b$, $F_N$ the set of unbiased specific forces $f^b$, $\tilde{W}_N$ the set of measured angular velocities $\tilde{\omega}_{ib}^b$, and $\tilde{F}_N$ the set of measured specific forces $\tilde{f}^b$, where the subscript $N$ represents the number of agents. These sets are given by
$$W_N = \left\{\omega_{ib,1}^b, \omega_{ib,2}^b, \ldots, \omega_{ib,N}^b\right\}, \tag{6}$$
$$F_N = \left\{f_1^b, f_2^b, \ldots, f_N^b\right\}, \tag{7}$$
$$\tilde{W}_N = \left\{\tilde{\omega}_{ib,1}^b, \tilde{\omega}_{ib,2}^b, \ldots, \tilde{\omega}_{ib,N}^b\right\}, \tag{8}$$
$$\tilde{F}_N = \left\{\tilde{f}_1^b, \tilde{f}_2^b, \ldots, \tilde{f}_N^b\right\}, \tag{9}$$
where the numbers 1 to N in the subscripts indicate which agent the corresponding output comes from.

2.2. Range Sensor Model

Similar to the expression of the inertial measurement in (5), the range measurement $\tilde{R}$ is given by
$$\tilde{R} = R + \delta R, \tag{10}$$
where $R$ denotes the real range and $\delta R$ denotes the ranging error. The ranging error follows the Gaussian distribution
$$\delta R \sim \mathcal{N}\left(0, \sigma_R^2\right), \tag{11}$$
where $\sigma_R$ denotes the ranging standard deviation.
There are two types of ranges in the multi-agent system: one is the range between an agent and a fixed anchor; the other is the range between two different agents. These two types of ranging can be distinguished by subscripts: $\tilde{R}_1$ denotes the measured range between agent 1 and the single anchor, while $\tilde{R}_{12}$ denotes the measured range between agent 1 and agent 2, which is received and measured by agent 1.
Many range sensors, including UWB, utilize time difference of arrival (TDOA) technology. According to the principle of TDOA [36], the range measurements of two nodes participating in transmission and reception simultaneously are different, due to the slight difference in round-trip times. This principle indicates that $\tilde{R}_{12}$ differs from $\tilde{R}_{21}$. So, there are $N^2$ range measurements in total at each measurement epoch, with $N$ range measurements between the agents and the single anchor and $N^2 - N$ range measurements between different agents, as given by
$$\tilde{R}_N = \left\{\tilde{R}_1, \tilde{R}_2, \ldots, \tilde{R}_N, \tilde{R}_{12}, \tilde{R}_{21}, \ldots, \tilde{R}_{N-1,N}, \tilde{R}_{N,N-1}\right\}, \tag{12}$$
where N in the subscripts indicates the number of the agents.
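To make the counting in (12) concrete, the following sketch (names ours) enumerates the keys of all $N^2$ range measurements: $N$ agent-to-anchor ranges plus the $N^2 - N$ ordered inter-agent pairs, which TDOA keeps distinct.

```python
from itertools import permutations

def range_measurement_keys(N):
    """Keys of the ranges in set (12); anchor is node 0, agents are 1..N."""
    anchor_ranges = [(A, 0) for A in range(1, N + 1)]      # R_A: agent A to anchor
    inter_ranges = list(permutations(range(1, N + 1), 2))  # R_AB with A != B
    return anchor_ranges + inter_ranges

keys = range_measurement_keys(4)
assert len(keys) == 16   # N**2 measurements for N = 4 agents
```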
The sensors equipped on each agent can receive the measurements, such as the UWB module and MEMS IMU shown in Figure 2. Due to the lack of UAVs, we replaced them with unmanned ground vehicles (UGVs) for demonstration. The measurements of angular velocity, specific force, and range update the sets in (8), (9), and (12), which form the inputs for the subsequent position estimation process.

3. Inertial Optimization Algorithm

3.1. Overview

The method of optimizing inertial data using inertial and range measurements is discussed in this section. The optimization process is shown in Figure 3. Firstly, we find the relationship between the inertial measurement and the range measurement, which helps to fuse the sensor outputs. Secondly, the optimization model is designed based on Bayesian theory, and the optimization equation is derived. Thirdly, we solve the optimization function, and the inertial data at the extreme point are the optimal solution. Fourthly, we discuss some issues on the extreme point to confirm that the calculated stationary point is exactly the point to be solved.

3.2. Range Calculated Using Inertial Output

The fusion of inertial and range sensors is based on a consistent representation of the outputs. So, the first step of the optimization is to represent the ranges using the inertial outputs, which is based on the inertial calculation process [31]. The inertial calculation process updates the agent's attitude, velocity, and position using the angular velocities and specific forces output by the inertial sensors.
The inertial sensors in the inertial navigation system (INS) include three orthogonal gyroscopes and three orthogonal accelerometers. In this paper, the coordinate directions of the navigation frame $n$ are defined as East–North–Up, while those of the body frame $b$ are Right–Ahead–Up. The INS calculates the attitude matrix (or direction cosine matrix, DCM) $C_b^n$, velocity $v^n$, and position $p^n$ of the carrier using the angular velocity $\omega_{ib}^b$ and specific force $f^b$ output by the inertial sensors. Specifically, $C_b^n$ denotes the attitude matrix from the body frame to the navigation frame, $v^n$ and $p^n$ are expressed in the navigation frame, and $\omega_{ib}^b$ and $f^b$ are expressed in the body frame, where the subscript $ib$ indicates the rotation of the body frame $b$ with respect to the Earth-centered inertial frame $i$. The inertial outputs $\omega_{ib}^b$ and $f^b$ are written here as true values, but they are actually measurements with errors.
In an update interval, the INS updates the attitude, velocity, and position in sequence. The velocity in the navigation frame is defined as
$$v^n = \left[v_E, v_N, v_U\right]^T, \tag{13}$$
where $v_E$, $v_N$, and $v_U$ denote the eastward, northward, and upward projections, respectively. The position is defined as
$$p^n = \left[L, \lambda, h\right]^T, \tag{14}$$
where $L$ denotes latitude, $\lambda$ denotes longitude, and $h$ denotes the height above the geoidal surface.
The inertial update process includes calculations related to the rotation of the Earth. Suppose that $\omega_{ie}$ is the rotational angular rate of the Earth; then, $\omega_{ie}^n$ is the rotational angular velocity of the Earth frame with respect to the inertial frame, and $\omega_{en}^n$ is the rotational angular velocity of the navigation frame with respect to the Earth frame. They can be computed as follows:
$$\omega_{ie}^n = \left[0,\ \omega_{ie}\cos L,\ \omega_{ie}\sin L\right]^T, \tag{15}$$
$$\omega_{en}^n = \left[-\frac{v_N}{R_M + h},\ \frac{v_E}{R_N + h},\ \frac{v_E \tan L}{R_N + h}\right]^T, \tag{16}$$
where $R_M$ and $R_N$ are, respectively, the meridian radius of curvature and the transverse radius of curvature of the WGS-84 reference ellipsoid.
The inertial calculation process follows the sequence of attitude update, velocity update, and position update:
  • Attitude Update Process: In the navigation frame, the differential equation [13] of the attitude matrix has the following form:
$$\dot{C}_b^n = C_b^n\left(\omega_{nb}^b \times\right), \tag{17}$$
where $\left(\omega_{nb}^b \times\right)$ is the antisymmetric matrix of the body angular velocity with respect to the navigation frame. The differential equation can be transformed into a function of $\omega_{ib}^b$ as
$$\dot{C}_b^n = C_b^n\left(\omega_{ib}^b \times\right) - C_b^n\left(\omega_{in}^b \times\right) = C_b^n\left(\omega_{ib}^b \times\right) - \left(\omega_{in}^n \times\right) C_b^n, \tag{18}$$
where $\omega_{in}^n$ is calculated as $\omega_{in}^n = \omega_{ie}^n + \omega_{en}^n$. Solving the differential equation in (17), the attitude matrix at time $t_k$ can be derived from the attitude matrix at time $t_{k-1}$ as
$$C_b^n\left(t_k\right) = C_b^n\left(t_{k-1}\right) \exp\left(\int_{t_{k-1}}^{t_k} \left(\omega_{nb}^b \times\right) \mathrm{d}t\right). \tag{19}$$
Considering the small rotation angle in the interval of integration, the attitude matrix in (19) can be approximated by the Taylor expansion as
$$C_b^n\left(t_k\right) \approx C_b^n\left(t_{k-1}\right)\left[I + \left(\omega_{nb}^b\left(t_k\right) \times\right)\Delta t\right] = C_b^n\left(t_{k-1}\right) + C_b^n\left(t_{k-1}\right)\left(\omega_{ib}^b\left(t_k\right) \times\right)\Delta t - \left(\omega_{in}^n\left(t_k\right) \times\right) C_b^n\left(t_{k-1}\right)\Delta t. \tag{20}$$
  • Velocity Update Process: In the navigation frame, the specific force is given by
$$f^n = C_b^n f^b, \tag{21}$$
and the differential equation [13] of velocity has the following form:
$$\dot{v}^n = f^n - \left(2\omega_{ie}^n + \omega_{en}^n\right) \times v^n + g^n, \tag{22}$$
where $g^n$ is the gravity vector, which is considered to be a constant vector when moving in a small area. Solving the differential equation in (22), the velocity at time $t_k$ can be derived from the velocity at time $t_{k-1}$ as
$$v^n\left(t_k\right) = v^n\left(t_{k-1}\right) + \dot{v}^n\left(t_{k-1}\right)\Delta t = v^n\left(t_{k-1}\right) + f^n\left(t_k\right)\Delta t - \left(2\omega_{ie}^n\left(t_k\right) + \omega_{en}^n\left(t_k\right)\right) \times v^n\left(t_{k-1}\right)\Delta t + g^n \Delta t, \tag{23}$$
where $\Delta t$ denotes the duration of the update interval $\left[t_{k-1}, t_k\right]$. For a low-precision inertial navigation system, the equation can be approximated as
$$v^n\left(t_k\right) \approx v^n\left(t_{k-1}\right) + f^n\left(t_k\right)\Delta t + g^n \Delta t. \tag{24}$$
  • Position Update Process: In the navigation frame, the differential equation [37] of position has the following form:
$$\dot{p}^n = R_c v^n, \tag{25}$$
where $R_c$ is the local curvature matrix, given by
$$R_c = \begin{bmatrix} 0 & \frac{1}{R_M + h} & 0 \\ \frac{1}{\left(R_N + h\right)\cos L} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{26}$$
Since $L$ changes little within an update interval and $h$ is much smaller than $R_M$ and $R_N$, $R_c$ can be considered to be a constant matrix. Solving the differential equation in (25), the position at time $t_k$ can be derived from the position at time $t_{k-1}$ as
$$p^n\left(t_k\right) = p^n\left(t_{k-1}\right) + R_c v^n\left(t_k\right) \Delta t. \tag{27}$$
Substituting (24) into (27) yields
$$p^n\left(t_k\right) = p^n\left(t_{k-1}\right) + R_c v^n\left(t_{k-1}\right)\Delta t + R_c f^n\left(t_k\right)\Delta t^2 + R_c g^n \Delta t^2 = C + R_c f^n\left(t_k\right)\Delta t^2, \tag{28}$$
where $C$ denotes the constant term
$$C = p^n\left(t_{k-1}\right) + R_c v^n\left(t_{k-1}\right)\Delta t + R_c g^n \Delta t^2. \tag{29}$$
Substituting (20) and (21) into (28) yields
$$p^n\left(t_k\right) = C + R_c C_b^n\left(t_k\right) f^b\left(t_k\right)\Delta t^2 = C + R_c C_b^n\left(t_{k-1}\right) f^b\left(t_k\right)\Delta t^2 + R_c C_b^n\left(t_{k-1}\right)\left(\omega_{ib}^b\left(t_k\right) \times\right) f^b\left(t_k\right)\Delta t^3 - R_c \left(\omega_{in}^n\left(t_k\right) \times\right) C_b^n\left(t_{k-1}\right) f^b\left(t_k\right)\Delta t^3; \tag{30}$$
then, the updated position $p^n\left(t_k\right)$ can be modeled as a function of the angular velocity $\omega_{ib}^b\left(t_k\right)$ and the specific force $f^b\left(t_k\right)$, as sketched in the code after this list.
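The following minimal Python sketch implements one such low-precision update step per (20), (24), and (27); it is an illustration under the approximations above, with variable names of our choosing ($R_c$ per (26) and the gravity vector $g^n$ are assumed given).

```python
import numpy as np

def skew(v):
    """Antisymmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def ins_update(C, v, p, w_ib_b, f_b, w_in_n, g_n, Rc, dt):
    """One strapdown step: C is the body-to-nav DCM, v = [vE, vN, vU],
    p = [L, lam, h]; w_ib_b and f_b are the (optimized) inertial outputs."""
    w_nb_b = w_ib_b - C.T @ w_in_n                 # body rate w.r.t. nav frame
    C_new = C @ (np.eye(3) + skew(w_nb_b) * dt)    # attitude, first order (20)
    v_new = v + (C_new @ f_b) * dt + g_n * dt      # velocity, low precision (24)
    p_new = p + (Rc @ v_new) * dt                  # position via curvature (27)
    return C_new, v_new, p_new
```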
Suppose that there are two nodes, A and B, which can be fixed anchors or moving agents. The range between node A and node B at time $t_k$ is given by the L2 norm
$$R_{AB}\left(t_k\right) = \left\| p_A^n\left(t_k\right) - p_B^n\left(t_k\right) \right\|, \tag{31}$$
where the subscripts A and B indicate the corresponding nodes. Normally, $p_A^n$ and $p_B^n$ are given by (14) in terms of latitude, longitude, and height. For the range calculated using inertial sensors, $p_A^n$ and $p_B^n$ can be computed directly using (30).

3.3. Optimization Model for Inertial Measurements

In this subsection, we introduce the centralized optimization method for multi-agent inertial data. As the outputs of the inertial and range sensors are modeled as Gaussian in Section 2, the estimated inertial data can be expressed in the form of a conditional probability, given by
$$P\left(W_N, F_N \mid \tilde{W}_N, \tilde{F}_N, \tilde{R}_N\right), \tag{32}$$
where $W_N$ and $F_N$ indicate the sets of estimated inertial data.
When the conditional probability in (32) takes its maximum value, the corresponding estimated inertial data in sets $W_N$ and $F_N$ can be considered the optimal estimate. In this way, the optimization model can be built as a max-value function, given by
$$\arg\max_{W_N, F_N} F\left(W_N, F_N\right) = \arg\max_{W_N, F_N} P\left(W_N, F_N \mid \tilde{W}_N, \tilde{F}_N, \tilde{R}_N\right). \tag{33}$$
According to the Bayesian formula, the model in (33) can be written in the form of prior information as
$$\arg\max_{W_N, F_N} F\left(W_N, F_N\right) = \arg\max_{W_N, F_N} \frac{P\left(\tilde{W}_N, \tilde{F}_N, \tilde{R}_N \mid W_N, F_N\right) P\left(W_N, F_N\right)}{P\left(\tilde{W}_N, \tilde{F}_N, \tilde{R}_N\right)}. \tag{34}$$
The measurements in sets $\tilde{W}_N$, $\tilde{F}_N$, and $\tilde{R}_N$ are unchanged within an observation time interval, so the probability $P\left(\tilde{W}_N, \tilde{F}_N, \tilde{R}_N\right)$ can be regarded as a constant. Then, the optimization model can be written as
$$\arg\max_{W_N, F_N} F\left(W_N, F_N\right) = \arg\max_{W_N, F_N} P\left(\tilde{W}_N, \tilde{F}_N, \tilde{R}_N \mid W_N, F_N\right) P\left(W_N, F_N\right). \tag{35}$$
Since the inertial measurements follow the Gaussian distribution, the prior probability $P\left(W_N, F_N\right)$ can be regarded as a constant, and the model is further simplified to the maximum likelihood estimation (MLE)
$$\arg\max_{W_N, F_N} F\left(W_N, F_N\right) = \arg\max_{W_N, F_N} P\left(\tilde{W}_N, \tilde{F}_N, \tilde{R}_N \mid W_N, F_N\right). \tag{36}$$
The sensors involved are independent of each other, so each measurement in sets $\tilde{W}_N$, $\tilde{F}_N$, and $\tilde{R}_N$ is only related to its corresponding optimization variables, and the conditional probability in (36) can be factorized as
$$\begin{aligned} P\left(\tilde{W}_N, \tilde{F}_N, \tilde{R}_N \mid W_N, F_N\right) &= P\left(\tilde{W}_N \mid W_N\right) P\left(\tilde{F}_N \mid F_N\right) P\left(\tilde{R}_N \mid W_N, F_N\right) \\ &= P\left(\tilde{\omega}_{ib,1}^b \mid \omega_{ib,1}^b\right) \cdots P\left(\tilde{\omega}_{ib,N}^b \mid \omega_{ib,N}^b\right) \cdot P\left(\tilde{f}_1^b \mid f_1^b\right) \cdots P\left(\tilde{f}_N^b \mid f_N^b\right) \\ &\quad \cdot P\left(\tilde{R}_1 \mid \omega_{ib,1}^b, f_1^b\right) P\left(\tilde{R}_2 \mid \omega_{ib,2}^b, f_2^b\right) \cdots P\left(\tilde{R}_N \mid \omega_{ib,N}^b, f_N^b\right) \\ &\quad \cdot P\left(\tilde{R}_{12} \mid \omega_{ib,1}^b, f_1^b, \omega_{ib,2}^b, f_2^b\right) P\left(\tilde{R}_{21} \mid \omega_{ib,1}^b, f_1^b, \omega_{ib,2}^b, f_2^b\right) \cdots \\ &\quad \cdot P\left(\tilde{R}_{N-1,N} \mid \omega_{ib,N-1}^b, f_{N-1}^b, \omega_{ib,N}^b, f_N^b\right) P\left(\tilde{R}_{N,N-1} \mid \omega_{ib,N-1}^b, f_{N-1}^b, \omega_{ib,N}^b, f_N^b\right), \end{aligned} \tag{37}$$
where the multiplied conditional probabilities can be divided into four categories, as shown in Table 1.
On the basis of the Gaussian distribution models in (4), (5), (10), and (11), these conditional probabilities can be expressed by Gaussian probability density functions as
$$P\left(\tilde{\omega}_{ib,A}^b \mid \omega_{ib,A}^b\right) = \frac{1}{\left(2\pi\right)^{3/2} \left|\Sigma_g\right|^{1/2}} \exp\left[-\frac{1}{2}\left(\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right)^T \Sigma_g^{-1} \left(\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right)\right] = \frac{1}{\left(2\pi\right)^{3/2} \left|\Sigma_g\right|^{1/2}} \exp\left(-\frac{1}{2\sigma_g^2}\left\|\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right\|^2\right), \tag{38}$$
$$P\left(\tilde{f}_A^b \mid f_A^b\right) = \frac{1}{\left(2\pi\right)^{3/2} \left|\Sigma_a\right|^{1/2}} \exp\left[-\frac{1}{2}\left(\tilde{f}_A^b - f_A^b - b_a\right)^T \Sigma_a^{-1} \left(\tilde{f}_A^b - f_A^b - b_a\right)\right] = \frac{1}{\left(2\pi\right)^{3/2} \left|\Sigma_a\right|^{1/2}} \exp\left(-\frac{1}{2\sigma_a^2}\left\|\tilde{f}_A^b - f_A^b - b_a\right\|^2\right), \tag{39}$$
$$P\left(\tilde{R}_A \mid \omega_{ib,A}^b, f_A^b\right) = \frac{1}{\left(2\pi\right)^{1/2} \sigma_R} \exp\left[-\frac{1}{2\sigma_R^2}\left(\tilde{R}_A - R_A\right)^2\right] = \frac{1}{\left(2\pi\right)^{1/2} \sigma_R} \exp\left[-\frac{1}{2\sigma_R^2}\left(\tilde{R}_A - \left\|p_A^n - p_0^n\right\|\right)^2\right], \tag{40}$$
$$P\left(\tilde{R}_{AB} \mid \omega_{ib,A}^b, f_A^b, \omega_{ib,B}^b, f_B^b\right) = \frac{1}{\left(2\pi\right)^{1/2} \sigma_R} \exp\left[-\frac{1}{2\sigma_R^2}\left(\tilde{R}_{AB} - R_{AB}\right)^2\right] = \frac{1}{\left(2\pi\right)^{1/2} \sigma_R} \exp\left[-\frac{1}{2\sigma_R^2}\left(\tilde{R}_{AB} - \left\|p_A^n - p_B^n\right\|\right)^2\right], \tag{41}$$
where $p_0^n$ denotes the position of the single anchor in the navigation frame.
Substituting (37)–(41) into (36) and taking the logarithm, the optimization model can be written as
$$\begin{aligned} \arg\max_{W_N, F_N} F\left(W_N, F_N\right) &= \arg\max_{W_N, F_N} \prod_{A=1}^N P\left(\tilde{\omega}_{ib,A}^b \mid \omega_{ib,A}^b\right) \cdot \prod_{A=1}^N P\left(\tilde{f}_A^b \mid f_A^b\right) \cdot \prod_{A=1}^N P\left(\tilde{R}_A \mid \omega_{ib,A}^b, f_A^b\right) \cdot \prod_{\substack{A,B=1 \\ A \neq B}}^{N} P\left(\tilde{R}_{AB} \mid \omega_{ib,A}^b, f_A^b, \omega_{ib,B}^b, f_B^b\right) \\ &= \arg\max_{W_N, F_N} \prod_{A=1}^N \exp\left(-\frac{1}{2\sigma_g^2}\left\|\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right\|^2\right) \cdot \prod_{A=1}^N \exp\left(-\frac{1}{2\sigma_a^2}\left\|\tilde{f}_A^b - f_A^b - b_a\right\|^2\right) \\ &\qquad \cdot \prod_{A=1}^N \exp\left[-\frac{1}{2\sigma_R^2}\left(\tilde{R}_A - \left\|p_A^n - p_0^n\right\|\right)^2\right] \cdot \prod_{\substack{A,B=1 \\ A \neq B}}^{N} \exp\left[-\frac{1}{2\sigma_R^2}\left(\tilde{R}_{AB} - \left\|p_A^n - p_B^n\right\|\right)^2\right] \\ &= \arg\max_{W_N, F_N} \left\{\sum_{A=1}^N -\frac{1}{2\sigma_g^2}\left\|\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right\|^2 + \sum_{A=1}^N -\frac{1}{2\sigma_a^2}\left\|\tilde{f}_A^b - f_A^b - b_a\right\|^2 \right. \\ &\qquad \left. + \sum_{A=1}^N -\frac{1}{2\sigma_R^2}\left(\tilde{R}_A - \left\|p_A^n - p_0^n\right\|\right)^2 + \sum_{\substack{A,B=1 \\ A \neq B}}^{N} -\frac{1}{2\sigma_R^2}\left(\tilde{R}_{AB} - \left\|p_A^n - p_B^n\right\|\right)^2\right\}. \end{aligned} \tag{42}$$
Therefore, the optimal estimation of the inertial measurements $W_N$ and $F_N$ is transformed into solving for the extreme points of the function $F\left(W_N, F_N\right)$.

3.4. Solution of Optimal Inertial Measurements

Solving for the extreme points, or the stationary points, of the function $F\left(W_N, F_N\right)$ is equivalent to solving the following equations of first-order partial derivatives:
$$\frac{\partial F\left(W_N, F_N\right)}{\partial \omega_{ib,1}^b} = \mathbf{0}, \quad \ldots, \quad \frac{\partial F\left(W_N, F_N\right)}{\partial \omega_{ib,N}^b} = \mathbf{0}, \tag{43}$$
$$\frac{\partial F\left(W_N, F_N\right)}{\partial f_1^b} = \mathbf{0}, \quad \ldots, \quad \frac{\partial F\left(W_N, F_N\right)}{\partial f_N^b} = \mathbf{0}. \tag{44}$$
Here, $A = 1, 2, \ldots, N$ and $B = 1, 2, \ldots, N$; every angular velocity $\omega_{ib,A}^b$ and specific force $f_A^b$ is a $3 \times 1$ column vector, and every $\mathbf{0}$ is a $1 \times 3$ row vector. The function $F\left(W_N, F_N\right)$ is a scalar according to (42), so there are $6N$ nonlinear equations in total. To simplify the derivative calculation, we divide the function $F\left(W_N, F_N\right)$ into four subfunctions:
$$F\left(W_N, F_N\right) = F_1 + F_2 + F_3 + F_4, \tag{45}$$
where
$$F_1 = \sum_{A=1}^N -\frac{1}{2\sigma_g^2}\left\|\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right\|^2, \tag{46}$$
$$F_2 = \sum_{A=1}^N -\frac{1}{2\sigma_a^2}\left\|\tilde{f}_A^b - f_A^b - b_a\right\|^2, \tag{47}$$
$$F_3 = \sum_{A=1}^N -\frac{1}{2\sigma_R^2}\left(\tilde{R}_A - \left\|p_A^n - p_0^n\right\|\right)^2, \tag{48}$$
$$F_4 = \sum_{\substack{A,B=1 \\ A \neq B}}^{N} -\frac{1}{2\sigma_R^2}\left(\tilde{R}_{AB} - \left\|p_A^n - p_B^n\right\|\right)^2. \tag{49}$$
The function $F\left(W_N, F_N\right)$ can thus be split into four functions for derivation. For agent A, the partial derivatives of the function with respect to $\omega_{ib,A}^b$ and $f_A^b$ are given by
$$\frac{\partial F\left(W_N, F_N\right)}{\partial \omega_{ib,A}^b} = \frac{\partial F_1}{\partial \omega_{ib,A}^b} + \frac{\partial F_2}{\partial \omega_{ib,A}^b} + \frac{\partial F_3}{\partial \omega_{ib,A}^b} + \frac{\partial F_4}{\partial \omega_{ib,A}^b}, \tag{50}$$
$$\frac{\partial F\left(W_N, F_N\right)}{\partial f_A^b} = \frac{\partial F_1}{\partial f_A^b} + \frac{\partial F_2}{\partial f_A^b} + \frac{\partial F_3}{\partial f_A^b} + \frac{\partial F_4}{\partial f_A^b}. \tag{51}$$
The functions $F_1$ and $F_2$ are only related to the angular velocity $\omega_{ib,A}^b$ or the specific force $f_A^b$ of the corresponding agent A, so the derivatives of $F_1$ and $F_2$ can be directly given by
$$\frac{\partial\left(F_1 + F_2\right)}{\partial \omega_{ib,A}^b} = \frac{\partial F_1}{\partial \omega_{ib,A}^b} = \frac{1}{\sigma_g^2}\left(\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right), \tag{52}$$
$$\frac{\partial\left(F_1 + F_2\right)}{\partial f_A^b} = \frac{\partial F_2}{\partial f_A^b} = \frac{1}{\sigma_a^2}\left(\tilde{f}_A^b - f_A^b - b_a\right). \tag{53}$$
The derivative of function $F_3$ is related to both the angular velocity $\omega_{ib,A}^b$ and the specific force $f_A^b$ of the corresponding agent A. So, the key to the derivative calculation of $F_3$ is the expression of $p_A^n$ in terms of $\omega_{ib,A}^b$ and $f_A^b$, which is derived in Section 3.2 and shown in (30). Taking the partial derivatives of $p_A^n$ with respect to $\omega_{ib,A}^b$ and $f_A^b$ using the matrix derivative rules in [38], we have
$$\frac{\partial p_A^{nT}}{\partial \omega_{ib,A}^b} = \left(f_A^b \times\right) C_n^b R_c^T \Delta t^3, \tag{54}$$
$$\frac{\partial p_A^{nT}}{\partial f_A^b} = C_n^b R_c^T \Delta t^2 - \left(\omega_{ib,A}^b \times\right) C_n^b R_c^T \Delta t^3 + C_n^b \left(\omega_{in}^n \times\right) R_c^T \Delta t^3, \tag{55}$$
where all the parameters are evaluated in the time interval $\left[t_{k-1}, t_k\right]$.
According to (48), the L2 norm $\left\|p_A^n - p_0^n\right\|$ in function $F_3$ can be expanded as
p A n p 0 n = p A n p 0 n T p A n p 0 n = p A n T p A n 2 p A n T p 0 n + p 0 n T p 0 n .
Therefore, the partial derivative of $F_3$ with respect to $\omega_{ib,A}^b$ is calculated as
$$\frac{\partial F_3}{\partial \omega_{ib,A}^b} = \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2} \cdot \frac{\partial \left\|p_A^n - p_0^n\right\|}{\partial \omega_{ib,A}^b} = \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2 \left\|p_A^n - p_0^n\right\|} \cdot \frac{\partial p_A^{nT}}{\partial \omega_{ib,A}^b}\left(p_A^n - p_0^n\right), \tag{57}$$
and the partial derivative of $F_3$ with respect to $f_A^b$ is calculated as
$$\frac{\partial F_3}{\partial f_A^b} = \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2} \cdot \frac{\partial \left\|p_A^n - p_0^n\right\|}{\partial f_A^b} = \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2 \left\|p_A^n - p_0^n\right\|} \cdot \frac{\partial p_A^{nT}}{\partial f_A^b}\left(p_A^n - p_0^n\right). \tag{58}$$
Substituting (54) into (57) and (55) into (58), we obtain the partial derivatives as
$$\frac{\partial F_3}{\partial \omega_{ib,A}^b} = \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2 \left\|p_A^n - p_0^n\right\|} \cdot \left(f_A^b \times\right) C_n^b R_c^T \left(p_A^n - p_0^n\right) \Delta t^3, \tag{59}$$
$$\frac{\partial F_3}{\partial f_A^b} = \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2 \left\|p_A^n - p_0^n\right\|} \cdot \left[\frac{1}{\Delta t} C_n^b R_c^T - \left(\omega_{ib,A}^b \times\right) C_n^b R_c^T + C_n^b \left(\omega_{in}^n \times\right) R_c^T\right] \left(p_A^n - p_0^n\right) \Delta t^3. \tag{60}$$
The derivative of function $F_4$ is related to all the angular velocities and specific forces of the $N$ agents, and it includes $N - 1$ pairwise terms for each agent; $F_4$ can be rewritten as
$$F_4 = -\frac{1}{2\sigma_R^2} \sum_{A=1}^{N} \sum_{\substack{j=1 \\ j \neq A}}^{N} \left[\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right)^2 + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)^2\right]. \tag{61}$$
Following the derivation for $F_3$, the partial derivatives of $F_4$ with respect to $\omega_{ib,A}^b$ and $f_A^b$ are calculated as
$$\begin{aligned} \frac{\partial F_4}{\partial \omega_{ib,A}^b} &= \sum_{\substack{j=1 \\ j \neq A}}^{N} \left[\frac{\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|}{\sigma_R^2} \cdot \frac{\partial \left\|p_A^n - p_j^n\right\|}{\partial \omega_{ib,A}^b} + \frac{\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|}{\sigma_R^2} \cdot \frac{\partial \left\|p_A^n - p_j^n\right\|}{\partial \omega_{ib,A}^b}\right] \\ &= \sum_{\substack{j=1 \\ j \neq A}}^{N} \frac{\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right) + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)}{\sigma_R^2 \left\|p_A^n - p_j^n\right\|} \cdot \frac{\partial p_A^{nT}}{\partial \omega_{ib,A}^b}\left(p_A^n - p_j^n\right) \\ &= \sum_{\substack{j=1 \\ j \neq A}}^{N} \frac{\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right) + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)}{\sigma_R^2 \left\|p_A^n - p_j^n\right\|} \cdot \left(f_A^b \times\right) C_n^b R_c^T \left(p_A^n - p_j^n\right) \Delta t^3, \end{aligned} \tag{62}$$
$$\begin{aligned} \frac{\partial F_4}{\partial f_A^b} &= \sum_{\substack{j=1 \\ j \neq A}}^{N} \left[\frac{\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|}{\sigma_R^2} \cdot \frac{\partial \left\|p_A^n - p_j^n\right\|}{\partial f_A^b} + \frac{\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|}{\sigma_R^2} \cdot \frac{\partial \left\|p_A^n - p_j^n\right\|}{\partial f_A^b}\right] \\ &= \sum_{\substack{j=1 \\ j \neq A}}^{N} \frac{\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right) + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)}{\sigma_R^2 \left\|p_A^n - p_j^n\right\|} \cdot \frac{\partial p_A^{nT}}{\partial f_A^b}\left(p_A^n - p_j^n\right) \\ &= \sum_{\substack{j=1 \\ j \neq A}}^{N} \frac{\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right) + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)}{\sigma_R^2 \left\|p_A^n - p_j^n\right\|} \cdot \left[\frac{1}{\Delta t} C_n^b R_c^T - \left(\omega_{ib,A}^b \times\right) C_n^b R_c^T + C_n^b \left(\omega_{in}^n \times\right) R_c^T\right] \left(p_A^n - p_j^n\right) \Delta t^3. \end{aligned} \tag{63}$$
Substituting (52), (59), and (62) into (50), and (53), (60), and (63) into (51), we obtain the partial derivatives of the function $F\left(W_N, F_N\right)$ for agent A with respect to $\omega_{ib,A}^b$ and $f_A^b$, as given by
$$\frac{\partial F\left(W_N, F_N\right)}{\partial \omega_{ib,A}^b} = \frac{1}{\sigma_g^2}\left(\tilde{\omega}_{ib,A}^b - \omega_{ib,A}^b - b_g\right) + \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2 \left\|p_A^n - p_0^n\right\|} \cdot \left(f_A^b \times\right) C_n^b R_c^T \left(p_A^n - p_0^n\right) \Delta t^3 + \sum_{\substack{j=1 \\ j \neq A}}^{N} \frac{\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right) + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)}{\sigma_R^2 \left\|p_A^n - p_j^n\right\|} \cdot \left(f_A^b \times\right) C_n^b R_c^T \left(p_A^n - p_j^n\right) \Delta t^3, \tag{64}$$
$$\frac{\partial F\left(W_N, F_N\right)}{\partial f_A^b} = \frac{1}{\sigma_a^2}\left(\tilde{f}_A^b - f_A^b - b_a\right) + \frac{\tilde{R}_A - \left\|p_A^n - p_0^n\right\|}{\sigma_R^2 \left\|p_A^n - p_0^n\right\|} \cdot \left[\frac{1}{\Delta t} C_n^b R_c^T - \left(\omega_{ib,A}^b \times\right) C_n^b R_c^T + C_n^b \left(\omega_{in}^n \times\right) R_c^T\right] \left(p_A^n - p_0^n\right) \Delta t^3 + \sum_{\substack{j=1 \\ j \neq A}}^{N} \frac{\left(\tilde{R}_{Aj} - \left\|p_A^n - p_j^n\right\|\right) + \left(\tilde{R}_{jA} - \left\|p_A^n - p_j^n\right\|\right)}{\sigma_R^2 \left\|p_A^n - p_j^n\right\|} \cdot \left[\frac{1}{\Delta t} C_n^b R_c^T - \left(\omega_{ib,A}^b \times\right) C_n^b R_c^T + C_n^b \left(\omega_{in}^n \times\right) R_c^T\right] \left(p_A^n - p_j^n\right) \Delta t^3. \tag{65}$$
Substituting (64) into (43) and (65) into (44), the $6N$ nonlinear equations are expressed in terms of the inertial outputs in sets $W_N$ and $F_N$; the estimated inertial outputs can then be obtained by solving these nonlinear equations.
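In practice, maximizing (42) (equivalently, solving (43) and (44)) can be cast as a weighted nonlinear least-squares problem over the stacked inertial variables. The sketch below illustrates this with SciPy; the position map pos_fn stands for (30), and all names are ours rather than the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, w_meas, f_meas, r_anchor, r_inter, b_g, b_a,
              sigma_g, sigma_a, sigma_r, pos_fn, p0):
    """Stacked weighted residuals; half their squared norm equals -F in (42).
    x packs [w_A; f_A] for agents A = 1..N; pos_fn(A, w_A, f_A) gives p_A^n
    via (30); r_inter[A, B] holds the inter-agent range measurements."""
    N = len(w_meas)
    wf = x.reshape(N, 2, 3)
    w, f = wf[:, 0, :], wf[:, 1, :]
    p = [pos_fn(A, w[A], f[A]) for A in range(N)]
    res = []
    for A in range(N):
        res.extend((w_meas[A] - w[A] - b_g) / sigma_g)   # gyro terms of F1
        res.extend((f_meas[A] - f[A] - b_a) / sigma_a)   # accel terms of F2
        res.append((r_anchor[A] - np.linalg.norm(p[A] - p0)) / sigma_r)  # F3
        for B in range(N):
            if B != A:                                   # inter-agent terms, F4
                res.append((r_inter[A, B]
                            - np.linalg.norm(p[A] - p[B])) / sigma_r)
    return np.array(res)

# Initial guess: the raw IMU data, as suggested for Issue 1 in Section 3.5.
# x0 = np.stack([w_meas, f_meas], axis=1).ravel()
# sol = least_squares(residuals, x0, args=(w_meas, f_meas, r_anchor, r_inter,
#                                          b_g, b_a, sigma_g, sigma_a, sigma_r,
#                                          pos_fn, p0))
```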

3.5. Discussion on Extreme Points

The optimal estimate of the inertial data is found at the extreme points of the function $F\left(W_N, F_N\right)$, as shown in (43) and (44). This method is based on the assumption that the stationary points solved from the first-order partial derivatives are exactly the extreme points.
When solving for a stationary point of a function in practical applications, it is typically assumed to be either a maximum or a minimum point. However, this assumption is not rigorous. Therefore, several issues with the above method of obtaining the stationary point by solving the nonlinear equations need to be discussed:
  • Does the system of nonlinear equations have a solution?
  • Is the solution of the system of nonlinear equations unique?
  • Does the obtained stationary point correspond to the extreme point?
  • Is the extreme point the maximum point?
Issue 1 can be addressed by the method of solving the nonlinear equation system in (43) and (44). Because it is a nonlinear least squares problem, it can be solved iteratively using optimization methods such as gradient descent. In this way, an initial value for the optimization variables needs to be set before the optimization, which can be the original IMU data $\tilde{\omega}_{ib}^b$ and $\tilde{f}^b$. If there are no ranging data in this time interval, or the initial value is already the optimal solution, no iteration is performed. That is to say, the equation system undoubtedly has a solution, which is either the original IMU data or the optimization variables after iteration.
The other issues are common to many extreme value problems and can be explained using the concept of the exponential family [39]. The exponential family distribution of $x$, which here stands for the IMU data $\tilde{\omega}_{ib}^b$ and $\tilde{f}^b$, with the parameter $\theta \in \mathbb{R}^d$ is defined as follows:
$$p\left(x \mid \theta\right) = h\left(x\right) \exp\left[\eta\left(\theta\right)^T T\left(x\right) - A\left(\theta\right)\right], \tag{66}$$
where $h\left(x\right)$, $\eta\left(\theta\right)$, $T\left(x\right)$, and $A\left(\theta\right)$ are all known functions for a given distribution. The Gaussian distribution with mean $\mu$ and variance $\sigma^2$ can be shown to belong to the exponential family:
$$p\left(x \mid \mu, \sigma\right) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{\left(x - \mu\right)^2}{2\sigma^2}\right] = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2}x^2 + \frac{\mu}{\sigma^2}x - \frac{\mu^2}{2\sigma^2} - \log\sigma\right), \tag{67}$$
where $h\left(x\right) = \frac{1}{\sqrt{2\pi}}$, $\eta\left(\mu, \sigma\right) = \left[-\frac{1}{2\sigma^2}, \frac{\mu}{\sigma^2}\right]^T$, $T\left(x\right) = \left[x^2, x\right]^T$, and $A\left(\mu, \sigma\right) = \frac{\mu^2}{2\sigma^2} + \log\sigma$. This form is shared by the Gaussian probability density functions in (38)–(41), so the Gaussian distribution models discussed above belong to the exponential family.
The key to determining the extreme point is the convexity of the log-partition function $A\left(\theta\right)$. The Hölder inequality and the Hessian matrix are both used in [39], and both prove that $A\left(\theta\right)$ is a convex function.
The maximum likelihood problem is defined as an estimation of $\theta$ that maximizes
$$L\left(\theta \mid X\right) = \log p\left(X \mid \theta\right) = \log \prod_{i=1}^n p\left(x_i \mid \theta\right), \tag{68}$$
with a given data set $X = \left\{x_1, x_2, \ldots, x_n\right\}$ and a distribution family to be fitted, $p\left(x \mid \theta\right)$. If $p\left(x \mid \theta\right)$ belongs to the exponential family, the likelihood function can be written as follows:
$$L\left(\theta \mid X\right) = \log h\left(X\right) + \eta\left(\theta\right)^T \sum_{i=1}^n T\left(x_i\right) - n A\left(\theta\right), \tag{69}$$
where $h\left(X\right) = \prod_{i=1}^n h\left(x_i\right)$ is a constant. $A\left(\theta\right)$ is a convex function of $\theta$, so $-A\left(\theta\right)$ is a concave function of $\theta$, and $\eta\left(\theta\right)^T \sum_{i=1}^n T\left(x_i\right)$ is a linear function of the natural parameter $\eta\left(\theta\right)$. Their sum $L\left(\theta \mid X\right)$ is therefore a concave function. That is to say, if there is a solution where the derivative equals zero, it must be the only one, and the corresponding function value is the maximum. In conclusion, Issues 2–4 are explained by the exponential family, and the stationary point is calculated from the nonlinear equations in (43) and (44).
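As a one-dimensional illustration of this argument, consider $n$ Gaussian samples with known $\sigma$. The log-likelihood in $\mu$ is $L\left(\mu\right) = \mathrm{const} - \sum_{i=1}^n \frac{\left(x_i - \mu\right)^2}{2\sigma^2}$; its derivative $\sum_{i=1}^n \frac{x_i - \mu}{\sigma^2}$ has the single zero $\hat{\mu} = \frac{1}{n}\sum_{i=1}^n x_i$, and its second derivative $-\frac{n}{\sigma^2}$ is strictly negative, so this stationary point is indeed the unique maximum.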
Therefore, the stationary points solved by (43) and (44) are exactly the extreme points, and the optimal estimation of the inertial measurements can be obtained at these extreme points.

4. Single-Anchor Cooperative Positioning Method

Based on the inertial optimization algorithm, we design a cooperative positioning method for multi-agent system in single-anchor environments. The framework of the single-anchor cooperative positioning method is depicted in Figure 4, and we present the design ideas of the method below.
Different from the common filtering method for inertial-ranging positioning, we add an optimizer to the inertial sensors at the ranging time, which replaces the inertial measurements $\tilde{W}_N$ and $\tilde{F}_N$ with the optimal estimates $W_N$ and $F_N$.
After the optimization, the attitude, velocity, and position can be updated using the inertial calculation, and the range calculated from the inertial data can be computed as shown in (31). However, the updated position does not differ significantly from the position calculated using the original inertial data. This can be attributed to the fact that the estimated inertial data lack prior information in the optimization process, as the inertial output is independently and identically distributed within each time interval, and the ranging rate is lower than the inertial rate.
In order to further improve the positioning accuracy and make full use of the measurements, a filter is employed after the optimizer to incorporate the prior position into navigation. The range measurement set $\tilde{R}_N$ is also fed to the filter as the observation. The EKF is selected considering its high efficiency and convenience in solving nonlinear problems, and the filtering process is formulated below.
For a centralized multi-agent nonlinear system, the discretized model in the time interval $\left[t_{k-1}, t_k\right]$ is given by the following:
x k = f k 1 x k 1 + w k 1 z k = H k x k + v k ,
where $x_k$ denotes the navigation state error vector, which includes the attitude $\left[\theta, \gamma, \varphi\right]^T$, velocity $\left[v_E, v_N, v_U\right]^T$, and position $\left[L, \lambda, h\right]^T$ errors of all the agents in the navigation frame; the error state $x_{k,A}$ for agent A is given by the following:
$$x_{k,A} = \left[\delta\theta_{k,A}, \delta\gamma_{k,A}, \delta\varphi_{k,A}, \delta v_{E,k,A}, \delta v_{N,k,A}, \delta v_{U,k,A}, \delta L_{k,A}, \delta\lambda_{k,A}, \delta h_{k,A}\right]^T, \tag{71}$$
w k 1 N 0 , Q k 1 is the Gaussian process noise. Q k is the diagonal matrix of process noise cross-covariance matrices, and the process noise cross-covariance matrix for agent A is given by the following:
$$Q_{k,A} = \mathrm{cov}\left(w_{k,A}\right) = \begin{bmatrix} \Sigma_g & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & \Sigma_a & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \end{bmatrix}. \tag{72}$$
$z$ is the measurement vector, which is the numerical difference between the ranges calculated using the inertial data and the ranges measured by the range sensors, $R_N - \tilde{R}_N$, i.e.,
$$z_k = \begin{bmatrix} R_1 - \tilde{R}_1 \\ R_2 - \tilde{R}_2 \\ \vdots \\ R_N - \tilde{R}_N \\ R_{12} - \tilde{R}_{12} \\ R_{21} - \tilde{R}_{21} \\ \vdots \\ R_{N-1,N} - \tilde{R}_{N-1,N} \\ R_{N,N-1} - \tilde{R}_{N,N-1} \end{bmatrix} = H\,\delta x + \nu. \tag{73}$$
$\nu_k \sim \mathcal{N}\left(0, R_k\right)$ is the Gaussian measurement noise. $R_k = \sigma_R^2$ is the observation noise covariance matrix, whose dimension is $1 \times 1$ for a single-beacon measurement. $H$ is the measurement matrix. $f$ is the state-transition matrix of all the agents, and the state-transition matrix for agent A is given by the following:
$$f = \begin{bmatrix} M_{aa} & M_{av} & M_{ap} \\ M_{va} & M_{vv} & M_{vp} \\ M_{pa} & M_{pv} & M_{pp} \end{bmatrix}, \tag{74}$$
where the block matrices in (74) are calculated using the attitude, velocity, and position updated by the inertial calculation process, as shown in Appendix A.
The prediction and correction steps of EKF are given by the following:
x ^ k | k 1 = f k 1 x ^ k 1 | k 1 ,
p k | k 1 = f k 1 p k 1 | k 1 f k 1 T + Q k ,
K k = p k | k 1 H k T H k p k | k 1 H k T + R k ,
x ^ k | k = x ^ k | k 1 + K k z k H k x ^ k | k 1 ,
p k | k = I K k H k p k | k 1 ,
where $P$ denotes the error covariance matrix and $I$ denotes the identity matrix.
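These steps map directly to code. The following is a minimal sketch of one predict/correct cycle of (75)–(79) on the stacked error state; the matrix names follow the equations, the function name is ours, and the inputs are assumed to be dense NumPy arrays.

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R):
    """One EKF cycle per (75)-(79): x is the error state, P its covariance."""
    x_pred = F @ x                                    # (75) state prediction
    P_pred = F @ P @ F.T + Q                          # (76) covariance prediction
    S = H @ P_pred @ H.T + R                          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)               # (77) Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)             # (78) measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred         # (79) covariance update
    return x_new, P_new
```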
The outputs of the filter are the updated attitude error, velocity error, and position error, which are added to the input of the filter and form the updated attitude, velocity, and position. Further, the whole process of the single-anchor cooperative positioning method is summarized as Algorithm 1.
Algorithm 1 Single-anchor cooperative positioning method using inertial measurements
1: Input: the sensor characteristics, with $b_g$, $\sigma_g$ for the gyroscope, $b_a$, $\sigma_a$ for the accelerometer, and $\sigma_R$ for the range sensor; the prior navigation state at time $t_{k-1}$.
2: loop
3:   if there are inertial measurements $\tilde{W}_N$, $\tilde{F}_N$ at $t_k$ then
4:     if there are range measurements $\tilde{R}_N$ at $t_k$ then
5:       Optimization: estimate the optimal inertial data in sets $W_N$ and $F_N$;
6:       Filtering: update the position using the measurements and the prior information. (Various filtering methods are optional; the EKF is selected in this paper for its convenience.)
7:     else
8:       Update the attitude, velocity, and position by the inertial calculation process;
9:     end if
10:  end if
11: end loop
12: Output: the updated position at $t_k$.

5. Simulation Experiment

To demonstrate the effectiveness of the proposed single-anchor cooperative positioning method, a simulation experiment was carried out. We built a four-agent single-anchor simulation environment and analyzed the differences in positioning accuracy between the proposed method, pure filtering, and pure optimization.

5.1. Model of Simulation Environment

A simulated scenario with four UAVs and a single anchor is built in this paper. The horizontal and three-dimensional projections of the scenario are shown in Figure 5. Each UAV starts moving simultaneously from rest in the same horizontal plane, with a vertical acceleration of 2.5 m/s² or a horizontal acceleration of 2 m/s². When moving at a constant speed, the velocity in the vertical direction is 5 m/s and the velocity in the horizontal direction is 2 m/s.
To facilitate the display of the moving trajectories of each agent, the navigation frame was converted to a local coordinate frame with the midpoint of the trajectory as the origin $\left[0, 0, 0\right]$. The coordinates of the anchor and the starting points of the agents in the local coordinate frame are shown in Table 2, with units in meters.
The four moving agents are equipped with inertial sensors and range sensors. The parameters of the simulated sensors are shown in Table 3, where the sensor outputs follow the Gaussian distributions discussed in Section 2. The inertial sensors on the four agents have the same parameters, and the range sensors on the anchor and the four agents also have the same parameters.
The simulated inertial data are output by the trajectory generator, as shown in Figure 6.

5.2. Simulation Results and Analysis

The single-anchor cooperative positioning method is a combination of the optimization and filtering processes, as presented in Section 4 and shown in Figure 4. Compared to the proposed method, both pure filtering and pure optimization show obvious divergence in the trajectories, as shown in Figure 7. As the altitude channel of the inertial calculation is divergent, we also compared the cases with and without height constraints.
As shown in Figure 7a,b, the positioning results of pure optimization diverge significantly from the real trajectories. The reason is that the estimated inertial data cannot provide prior information, as discussed in Section 4. Due to the severe divergence of pure optimization itself, the difference between the cases with and without height constraints is not significant.
As shown in Figure 7c,d, the positioning results of pure filtering also deviate from the real trajectories. If there is no height constraint, as only one anchor is used for ranging, the agents move near the circle centered on the anchor. When there are height constraints, the agents still move near the arc centered on the anchor, but the divergence is less pronounced than without the height constraint. Moreover, the constraining effect of the range measurements between nodes is evident with the height constraints.
If there is no cooperation among the agents, meaning no communication of the ranges $\tilde{R}_{12}, \tilde{R}_{21}, \ldots, \tilde{R}_{N-1,N}, \tilde{R}_{N,N-1}$ between the agents, the trajectories of the agents solved using the single-anchor positioning method are shown in Figure 8.
The positioning result in Figure 8 is equivalent to each agent being individually positioned relative to the anchor using single-anchor positioning. The observation only includes the range measurements between the four agents and the anchor, and the measurement vector in the filter is simplified to
$$z = \begin{bmatrix} R_1 - \tilde{R}_1 \\ R_2 - \tilde{R}_2 \\ R_3 - \tilde{R}_3 \\ R_4 - \tilde{R}_4 \end{bmatrix} = H\,\delta x + \nu. \tag{80}$$
There is no height constraint in the single-anchor positioning method, and it is obvious that when relying solely on range measurements from a single anchor, the trajectory diverges significantly in the vertical direction. Regardless, the trajectories in Figure 8 are much closer to the real trajectories than those solved by pure optimization or pure filtering.
If we add the range measurements $\tilde{R}_{12}, \tilde{R}_{21}, \ldots, \tilde{R}_{N-1,N}, \tilde{R}_{N,N-1}$ between the agents to form the cooperative system, the results of the single-anchor cooperative positioning method are shown in Figure 9. The measurement vector here is the vector in (73).
The positioning results of two single-anchor positioning methods are presented above: one only utilizes single anchor observations, while the other integrates observations between agents for cooperation. The components of positioning errors between the estimated position and the real position in three directions are shown in Figure 10. The directions are East, North, and Up in the navigation frame. It is obvious that the cooperation of the agents improves the positioning accuracy.
The positioning results can be quantitatively analyzed using root mean square error (RMSE), as given by
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^n \left\|\hat{p}_i - p_i\right\|^2}, \tag{81}$$
where $n$ denotes the number of time intervals, $\hat{p}_i$ denotes the estimated position, and $p_i$ denotes the real position. The RMSEs of the positioning results are shown in Table 4. The table indicates that the positioning error in each direction for each agent decreases with the introduction of cooperation, which is reflected in the overall decrease in total error.
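For reference, (81) is a one-line computation over the stacked trajectories; a minimal sketch (names ours):

```python
import numpy as np

def rmse(p_est, p_true):
    """Position RMSE per (81); p_est and p_true are (n, 3) arrays."""
    return np.sqrt(np.mean(np.sum((p_est - p_true) ** 2, axis=1)))
```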
Therefore, the simulation experiment shows that the single-anchor cooperative positioning method proposed in this paper has higher positioning accuracy compared to individual optimization, individual filtering, or the non-cooperative “optimization + filtering” method.

6. Real-World Experiment

To further validate the effectiveness of the single-anchor cooperative positioning method, we conducted a real-world experiment using the three agents shown in Figure 2. Due to equipment limitations, we replaced the UAVs with UGVs, but the equipped sensors are also applicable to UAVs.

6.1. Real-World Experimental Environment

The experiment was conducted indoors to create a GNSS-denied environment, and the satellite imagery and sketch map are shown in Figure 11.
The UGVs are equipped with IMUs and UWB modules, and the parameters of the sensors are given in Table 5. The vehicles started moving from rest, with the starting points shown in Figure 11b, and the agents were aligned while stationary to obtain accurate initial states.

6.2. Real-World Results and Analysis

The three UGVs moving along the corridor maintained communication with each other, as shown in Figure 12a. Due to the rejection of satellite signals, an odometer and additional anchors were used for integrated navigation to construct the reference trajectories of the agents; the reference trajectory of Agent 1 is shown in Figure 12b. The trajectories solved by pure filtering are shown in Figure 12c, which diverge significantly, as in the simulation experiment.
Similar to the simulation experiment, we estimated the position using the two single-anchor positioning methods; the components of the positioning errors between the estimated position and the real position are shown in Figure 13. Additionally, the RMSEs of the positioning results are shown in Table 6.
Although the correction of the eastward and northward positions is not as evident as that of the cooperative method in the simulation experiment, the vertical positioning error is noticeably reduced when the inter-agent ranges are taken into consideration. Meanwhile, compared to the method without cooperation, the positioning error of the cooperative method when stationary is more stable.
The weaker-than-expected performance of the cooperative method may be due to errors in the integrated navigation used for the reference trajectories. Nevertheless, the positions estimated using the proposed single-anchor positioning method are much more accurate than those estimated by pure filtering.

7. Conclusions

This paper's novel contribution is a single-anchor cooperative positioning method for multi-UAV systems. The inertial measurements from the agents are optimized using the range measurements between the UAVs and the anchor. The Bayesian estimation process is derived in detail from the sensor models and the principle of inertial calculation. Simulation and real-world experiments were carried out, and the effectiveness of the method was demonstrated. Although the estimated position cannot be completely consistent with the real one, it is sufficient in the case of insufficient sensor measurements. More experiments are required in future work to verify that the method is effective for longer ranges and more agents.

Author Contributions

Conceptualization, J.Y.; Formal analysis, J.Y.; Funding acquisition, K.T.; Investigation, J.Y. and Y.G.; Methodology, J.Y.; Resources, Y.G.; Software, J.Y.; Supervision, Y.G. and K.T.; Visualization, Y.G.; Writing—original draft, J.Y.; Writing—review and editing, Y.G. and K.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the National Natural Science Foundation of China (62103424).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Appendix A

The block matrices of state-transition matrix in (74) are calculated using the navigation parameters and the attitude, velocity, and position updated following the inertial calculation process, as shown below:
$$M_{aa} = -\left(\omega_{in}^n \times\right): \quad \begin{cases} M_{aa}\left(1,1\right) = M_{aa}\left(2,2\right) = M_{aa}\left(3,3\right) = 0 \\ M_{aa}\left(1,2\right) = \omega_{ie}\sin L + \frac{v_E \tan L}{R_N + h} \\ M_{aa}\left(1,3\right) = -\omega_{ie}\cos L - \frac{v_E}{R_N + h} \\ M_{aa}\left(2,1\right) = -\omega_{ie}\sin L - \frac{v_E \tan L}{R_N + h} \\ M_{aa}\left(2,3\right) = -\frac{v_N}{R_M + h} \\ M_{aa}\left(3,1\right) = \omega_{ie}\cos L + \frac{v_E}{R_N + h} \\ M_{aa}\left(3,2\right) = \frac{v_N}{R_M + h} \end{cases} \tag{A1}$$
$$M_{av} = \begin{bmatrix} 0 & -\frac{1}{R_M + h} & 0 \\ \frac{1}{R_N + h} & 0 & 0 \\ \frac{\tan L}{R_N + h} & 0 & 0 \end{bmatrix}, \tag{A2}$$
$$M_{ap} = \begin{bmatrix} 0 & 0 & \frac{v_N}{\left(R_M + h\right)^2} \\ -\omega_{ie}\sin L & 0 & -\frac{v_E}{\left(R_N + h\right)^2} \\ \omega_{ie}\cos L + \frac{v_E \sec^2 L}{R_N + h} & 0 & -\frac{v_E \tan L}{\left(R_N + h\right)^2} \end{bmatrix}, \tag{A3}$$
$$M_{va} = \left(f^n \times\right) = \begin{bmatrix} 0 & -f_U^n & f_N^n \\ f_U^n & 0 & -f_E^n \\ -f_N^n & f_E^n & 0 \end{bmatrix}, \tag{A4}$$
$$\begin{cases} M_{vv}\left(1,1\right) = \frac{v_N \tan L - v_U}{R_N + h} \\ M_{vv}\left(1,2\right) = 2\omega_{ie}\sin L + \frac{v_E \tan L}{R_N + h} \\ M_{vv}\left(1,3\right) = -2\omega_{ie}\cos L - \frac{v_E}{R_N + h} \\ M_{vv}\left(2,1\right) = -2\omega_{ie}\sin L - \frac{2 v_E \tan L}{R_N + h} \\ M_{vv}\left(2,2\right) = -\frac{v_U}{R_M + h} \\ M_{vv}\left(2,3\right) = -\frac{v_N}{R_M + h} \\ M_{vv}\left(3,1\right) = 2\omega_{ie}\cos L + \frac{2 v_E}{R_N + h} \\ M_{vv}\left(3,2\right) = \frac{2 v_N}{R_M + h} \\ M_{vv}\left(3,3\right) = 0 \end{cases} \tag{A5}$$
$$\begin{cases} M_{vp}\left(1,2\right) = M_{vp}\left(2,2\right) = M_{vp}\left(3,2\right) = 0 \\ M_{vp}\left(1,1\right) = \frac{v_E v_N \sec^2 L}{R_N + h} + 2 v_N \omega_{ie}\cos L + 2 v_U \omega_{ie}\sin L \\ M_{vp}\left(1,3\right) = \frac{v_E v_U - v_E v_N \tan L}{\left(R_N + h\right)^2} \\ M_{vp}\left(2,1\right) = -2 v_E \omega_{ie}\cos L - \frac{v_E^2 \sec^2 L}{R_N + h} \\ M_{vp}\left(2,3\right) = \frac{v_E^2 \tan L}{\left(R_N + h\right)^2} + \frac{v_N v_U}{\left(R_M + h\right)^2} \\ M_{vp}\left(3,1\right) = -2 v_E \omega_{ie}\sin L \\ M_{vp}\left(3,3\right) = -\frac{v_E^2}{\left(R_N + h\right)^2} - \frac{v_N^2}{\left(R_M + h\right)^2} \end{cases} \tag{A6}$$
$$M_{pa} = 0_{3\times3}, \tag{A7}$$
$$M_{pv} = \begin{bmatrix} 0 & \frac{1}{R_M + h} & 0 \\ \frac{\sec L}{R_N + h} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{A8}$$
$$M_{pp} = \begin{bmatrix} 0 & 0 & -\frac{v_N}{\left(R_M + h\right)^2} \\ \frac{v_E \tan L \sec L}{R_N + h} & 0 & -\frac{v_E \sec L}{\left(R_N + h\right)^2} \\ 0 & 0 & 0 \end{bmatrix}. \tag{A9}$$

References

  1. Yu, X.; Li, Q.; Jorge, P.Q.; Tomi, W. Applications of UWB Networks and Positioning to Autonomous Robots and Industrial Systems. In Proceedings of the Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 7–10 June 2021. [Google Scholar]
  2. Wang, S.; Carmen, M.A.; Jorge, P.Q.; Zhou, Z.; Tomi, W. UWB-Based Localization for Multi-UAV Systems and Collaborative Heterogeneous Multi-Robot Systems. Procedia Comput. Sci. 2020, 175, 357–364. [Google Scholar]
  3. Perakis, H.; Gikas, V. Evaluation of Range Error Calibration Models for Indoor UWB Positioning Applications. In Proceedings of the 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Nantes, France, 24–27 September 2018. [Google Scholar]
  4. Li, M.; Zhu, H.; You, S.; Tang, C. UWB-Based Localization System Aided with Inertial Sensor for Underground Coal Mine Applications. IEEE Sensors J. 2020, 20, 6652–6669. [Google Scholar] [CrossRef]
  5. Pelletier, J.; O’Neill, B.; Leonard, J.; Freitag, L.; Gallimore, E. AUV-Assisted Diver Navigation. IEEE Robot. Autom. Lett. 2008, 7, 10208–10215. [Google Scholar] [CrossRef]
  6. Xu, C.; Xu, C.; Xing, W.; Herrera, V.E. Guided Trajectory Filtering for Challenging Long-Range AUV Navigation. IEEE Trans. Instrum. Meas. 2023, 72, 8502510. [Google Scholar] [CrossRef]
  7. Fang, X.; Wang, C.; Nguyen, T.; Xie, L. Graph Optimization Approach to Range-Based Localization. IEEE Trans. Syst. Man Cybern. Syst. 2018, 51, 6830–6841. [Google Scholar] [CrossRef]
  8. Lin, Z.; Fu, M.; Diao, Y. Distributed Self Localization for Relative Position Sensing Networks in 2D Space. IEEE Trans. Signal Process. 2015, 14, 3751–3761. [Google Scholar] [CrossRef]
9. Li, X.; Luo, X.; Zhao, S. Globally Convergent Distributed Network Localization Using Locally Measured Bearings. IEEE Trans. Control Netw. Syst. 2020, 7, 245–253.
10. Poulose, A.; Emeršič, Ž.; Eyobu, O.S.; Han, D.S. An Accurate Indoor User Position Estimator For Multiple Anchor UWB Localization. In Proceedings of the International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 21–23 October 2020.
11. Lee, G.T.; Seo, S.B.; Jeon, W.S. Indoor Localization by Kalman Filter based Combining of UWB-Positioning and PDR. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 9–12 January 2021.
12. Nguyen, T.M.; Zaini, A.H.; Wang, C.; Guo, K.; Xie, L. Robust Target-Relative Localization with Ultra-Wideband Ranging and Communication. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018.
13. Wu, Y.; Pan, X. Velocity/Position Integration Formula Part I: Application to In-Flight Coarse Alignment. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1006–1023.
14. Zheng, S.; Li, Z.; Liu, Y.; Zhang, H.; Zou, X. An Optimization-Based UWB-IMU Fusion Framework for UGV. IEEE Sensors J. 2022, 22, 4369–4377.
15. Cossette, C.C.; Shalaby, M.; Saussie, D.; Forbes, J.R.; Ny, J.L. Relative Position Estimation Between Two UWB Devices with IMUs. IEEE Robot. Autom. Lett. 2021, 6, 4313–4320.
16. Xu, Y.; Shen, T.; Chen, X.; Bu, L.; Feng, N. Predictive Adaptive Kalman Filter and Its Application to INS/UWB-integrated Human Localization with Missing UWB-based Measurements. Int. J. Autom. Comput. 2018, 16, 604–613.
17. Li, J.; Bi, Y.; Li, K.; Wang, K.; Lin, F.; Chen, B.M. Accurate 3D Localization for MAV Swarms by UWB and IMU Fusion. In Proceedings of the IEEE 14th International Conference on Control and Automation (ICCA), Anchorage, AK, USA, 12–15 June 2018.
18. Feng, D.; Wang, C.; He, C.; Zhuang, Y.; Xia, X. Kalman-Filter-Based Integration of IMU and UWB for High-Accuracy Indoor Positioning and Navigation. IEEE Internet Things J. 2020, 7, 3133–3146.
19. Shalaby, M.; Cossette, C.C.; Ny, J.L.; Forbes, J.R. Cascaded Filtering Using the Sigma Point Transformation. IEEE Robot. Autom. Lett. 2021, 6, 4758–4765.
20. Pavlasek, N.; Walsh, A.; Forbes, J.R. Invariant Extended Kalman Filtering Using Two Position Receivers for Extended Pose Estimation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021.
21. Liu, Y.; Lian, B.; Zhou, T. Gaussian message passing-based cooperative localization with node selection scheme in wireless networks. Signal Process. 2019, 156, 166–176.
22. Shalaby, M.; Cossette, C.C.; Forbes, J.R.; Ny, J.L. Relative Position Estimation in Multi-Agent Systems Using Attitude-Coupled Range Measurements. IEEE Robot. Autom. Lett. 2021, 6, 4955–4961.
23. Zheng, S.; Li, Z.; Yin, Y.; Liu, Y.; Zhang, H.; Zheng, P.; Zou, X. Multi-robot relative positioning and orientation system based on UWB range and graph optimization. Measurement 2022, 195, 111068.
24. Großwindhager, B.; Stocker, M.; Rath, M.; Boano, C.A.; Römer, K. SnapLoc: An ultra-fast UWB-based indoor localization system for an unlimited number of tags. In Proceedings of the 18th International Conference on Information Processing in Sensor Networks, Montreal, QC, Canada, 16–18 April 2019; Volume 10, pp. 142–149.
25. Wang, S.; Gu, D.; Chen, L.; Hu, H. Single Beacon based Localization with Constraints and Unknown Initial Poses. IEEE Trans. Ind. Electron. 2016, 63, 2229–2241.
26. Zhang, T.; Wang, Z.; Li, Y.; Tong, J. A Passive Acoustic Positioning Algorithm Based on Virtual Long Baseline Matrix Window. J. Navig. 2019, 72, 193–206.
27. Lu, L.; Li, Y.; Rizos, C. Virtual baseline method for Beidou attitude determination—An improved long-short baseline ambiguity resolution method. Adv. Space Res. 2013, 51, 1029–1034.
28. Reis, J.O.; Batista, P.T.M.; Oliveira, P.; Silvestre, C. Source Localization Based on Acoustic Single Direction Measurements. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 2837–2852.
29. Chen, L.; Cao, K.; Xie, L.; Li, X.; Feroskhan, M. 3-D Network Localization Using Angle Measurements and Reduced Communication. IEEE Trans. Signal Process. 2022, 70, 2402–2415.
30. Yuan, M.; Li, Y.; Li, Y.; Pang, S.; Zhang, J. A fast way of single-beacon localization for AUVs. Appl. Ocean Res. 2022, 119, 103037.
31. Cao, Y.; Yang, C.; Li, R.; Knoll, A.; Beltrame, G. Accurate position tracking with a single UWB anchor. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020.
32. Yang, J.; Wu, M.; Cong, Y.; Guo, Y.; Zhou, X.; Tang, K. A Novel Single-beacon Positioning with Inertial Measurement Optimization. IEEE Robot. Autom. Lett. 2023.
  33. IEEE Std 952-1997; IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Interferometric Fiber Optic Gyros. The Institute of Electrical and Electronics Engineers, Inc.: New York, NY, USA, 1998.
34. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. IEEE Aerosp. Electron. Syst. Mag. 2015, 30, 26–27.
35. El-Sheimy, N.; Hou, H.; Niu, X. Analysis and modeling of inertial sensors using Allan variance. IEEE Trans. Instrum. Meas. 2008, 57, 140–149.
36. Cano, J.; Chidami, S.; Ny, J.L. A Kalman Filter-Based Algorithm for Simultaneous Time Synchronization and Localization in UWB Networks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
37. Mueller, M.W.; Hehn, M.; D’Andrea, R. A Computationally Efficient Motion Primitive for Quadrocopter Trajectory Generation. IEEE Trans. Robot. 2015, 31, 1294–1310.
38. Magnus, J.R.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics; John Wiley & Sons: Hoboken, NJ, USA, 2019; pp. 87–110.
39. Engelhardt, B. Introduction: Exponential family, conjugacy, and sufficiency. In STA561: Probabilistic Machine Learning; Princeton University: Princeton, NJ, USA, 2013; pp. 5–6.
Figure 1. The block diagram of the classification of inertial errors.
Figure 2. Sensors and platform used in this paper.
Figure 3. The process of the optimization algorithm. The four blocks marked with red dashed boxes correspond to the following subsections.
Figure 4. A block diagram of the architecture of the single-anchor cooperative positioning method. The method includes two threads: optimization and filtering.
Figure 5. Agent 1 and Agent 3 move anticlockwise in the horizontal plane along a square route for seven and a half laps, as indicated by the arrow. Agent 2 and Agent 4 perform a vertical takeoff and landing and move anticlockwise in a horizontal plane 100 m above Agent 1 and Agent 3.
Figure 6. The inertial measurements and unbiased truth values in the simulation, which include the angular velocity increments $\omega_x$, $\omega_y$, and $\omega_z$ and the specific force increments $f_x$, $f_y$, and $f_z$. Here, x, y, and z indicate the Right-Ahead-Up directions of the body frame.
Figure 7. The trajectories of pure optimization are shown in (a,b), and the trajectories of pure filtering are shown in (c,d). The trajectories in (a,c) have no height constraints, while those in (b,d) have height constraints.
Figure 8. The horizontal and three-dimensional projections of the trajectories solved using the single-anchor positioning method without the ranging measurements between the agents in the simulation experiment.
Figure 9. The horizontal and three-dimensional projections of the trajectories solved using the single-anchor cooperative positioning method in the simulation experiment.
Figure 10. The simulated positioning errors of the two single-anchor positioning methods.
Figure 11. The satellite imagery and sketch map of the real-world experiment environment. The anchor's position is marked by the green arrow in (a) and the red star in (b). The starting points of the three UGVs (agents) are marked in (b); their coordinates are converted from the navigation frame to a local coordinate frame.
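A standard way to perform the frame conversion mentioned in the Figure 11 caption is to map geodetic coordinates to a local East-North-Up (ENU) frame about a chosen origin. The sketch below is our own illustrative implementation of that textbook conversion (cf. [34]), not the authors' code; the function names are our choices.

```python
import numpy as np

# WGS-84 ellipsoid constants
A = 6378137.0                # semi-major axis [m]
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Geodetic (lat/lon in radians, h in metres) -> ECEF position [m]."""
    s, c = np.sin(lat), np.cos(lat)
    n = A / np.sqrt(1.0 - E2 * s * s)   # prime vertical radius of curvature
    return np.array([(n + h) * c * np.cos(lon),
                     (n + h) * c * np.sin(lon),
                     (n * (1.0 - E2) + h) * s])

def geodetic_to_enu(lat, lon, h, lat0, lon0, h0):
    """Local East-North-Up coordinates of (lat, lon, h) about origin (lat0, lon0, h0)."""
    d = geodetic_to_ecef(lat, lon, h) - geodetic_to_ecef(lat0, lon0, h0)
    s0, c0 = np.sin(lat0), np.cos(lat0)
    sl, cl = np.sin(lon0), np.cos(lon0)
    r = np.array([[-sl,       cl,      0.0],   # rotation from ECEF to ENU
                  [-s0 * cl, -s0 * sl, c0 ],
                  [ c0 * cl,  c0 * sl, s0 ]])
    return r @ d

# Hypothetical usage with made-up coordinates (the paper does not publish them):
# enu = geodetic_to_enu(np.radians(28.23), np.radians(112.99), 60.0,
#                       np.radians(28.22), np.radians(112.98), 50.0)
```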
Figure 12. The UGVs moving in the experimental environment. (a) shows the vehicles; (b) shows the reference trajectory of Agent 1; (c) shows the trajectories estimated by pure filtering.
Figure 13. The real-world positioning errors of the two single-anchor positioning methods in the East, North, and Up directions in the navigation frame.
Table 1. The number of conditional probabilities multiplied in Equation (37). There are four categories of conditional probabilities: $P(\tilde{\boldsymbol{\omega}}_{ib,A}^{b} \mid \boldsymbol{\omega}_{ib,A}^{b})$ for the angular velocity of agent A, $P(\tilde{\boldsymbol{f}}_{A}^{b} \mid \boldsymbol{f}_{A}^{b})$ for the specific force of agent A, $P(\tilde{R}_{A} \mid \boldsymbol{\omega}_{ib,A}^{b}, \boldsymbol{f}_{A}^{b})$ for the range between agent A and the single anchor, and $P(\tilde{R}_{AB} \mid \boldsymbol{\omega}_{ib,A}^{b}, \boldsymbol{f}_{A}^{b}, \boldsymbol{\omega}_{ib,B}^{b}, \boldsymbol{f}_{B}^{b})$ for the range between agent A and agent B (measured by A).

| Conditional Probability | $P(\tilde{\boldsymbol{\omega}}_{ib,A}^{b} \mid \boldsymbol{\omega}_{ib,A}^{b})$ | $P(\tilde{\boldsymbol{f}}_{A}^{b} \mid \boldsymbol{f}_{A}^{b})$ | $P(\tilde{R}_{A} \mid \boldsymbol{\omega}_{ib,A}^{b}, \boldsymbol{f}_{A}^{b})$ | $P(\tilde{R}_{AB} \mid \boldsymbol{\omega}_{ib,A}^{b}, \boldsymbol{f}_{A}^{b}, \boldsymbol{\omega}_{ib,B}^{b}, \boldsymbol{f}_{B}^{b})$ |
|---|---|---|---|---|
| Number | $N$ | $N$ | $N$ | $N^{2}-N$ |
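As a worked check on these counts: assuming each of the $N$ agents contributes one angular-velocity term, one specific-force term, and one anchor-range term, and every ordered pair of distinct agents $(A, B)$ contributes one inter-agent range term (hence the $N^{2}-N$ entry), the likelihood in Equation (37) is a product of

$$
N + N + N + N(N-1) = N^{2} + 2N
$$

conditional probabilities. For the four-agent simulation, this gives $4^{2} + 2 \times 4 = 24$ factors.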
Table 2. Coordinates of the anchor and the starting points of the agents in the local coordinate frame (in meters).

| Point | Coordinate |
|---|---|
| Anchor | [200, 200, 5] |
| Starting point of Agent 1 | [100, 150, 0] |
| Starting point of Agent 2 | [50, 100, 0] |
| Starting point of Agent 3 | [100, 150, 0] |
| Starting point of Agent 4 | [50, 100, 0] |
Table 3. Parameters used in the simulation experiment.

| Parameter | Value |
|---|---|
| Gyroscope constant bias $b_g$ | 0.2°/h |
| Gyroscope angular random walk $\sigma_g$ | 0.02°/$\sqrt{\mathrm{h}}$ |
| Accelerometer constant bias $b_a$ | 2 mg |
| Accelerometer velocity random walk $\sigma_a$ | 0.2 mg/$\sqrt{\mathrm{Hz}}$ |
| Inertial sensor sampling period | 0.01 s (100 Hz) |
| Ranging standard deviation $\sigma_R$ | 0.1 m |
| Range sensor sampling period | 0.1 s (10 Hz) |
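To make these units concrete, the sketch below converts the Table 3 specifications into the per-sample bias and white-noise values a 100 Hz discrete-time IMU simulation would use. It is an illustrative conversion under the usual Allan-variance conventions [35], not code from the paper; the helper name `simulate_imu_sample` and the NumPy-based structure are our own choices.

```python
import numpy as np

DEG = np.pi / 180.0
G = 9.80665          # standard gravity [m/s^2]
DT = 0.01            # inertial sensor sampling period [s] (100 Hz)

# Gyroscope: constant bias 0.2 deg/h, angular random walk 0.02 deg/sqrt(h)
gyro_bias = 0.2 * DEG / 3600.0          # rad/s
arw = 0.02 * DEG / 60.0                 # deg/sqrt(h) -> rad/sqrt(s)
gyro_noise_std = arw / np.sqrt(DT)      # rad/s, white-noise std per sample

# Accelerometer: constant bias 2 mg, velocity random walk 0.2 mg/sqrt(Hz)
accel_bias = 2e-3 * G                   # m/s^2
vrw = 0.2e-3 * G                        # (m/s^2)/sqrt(Hz)
accel_noise_std = vrw / np.sqrt(DT)     # m/s^2, white-noise std per sample

def simulate_imu_sample(true_rate, true_force, rng=np.random.default_rng()):
    """Corrupt one true angular-rate/specific-force sample (rad/s, m/s^2).

    For simplicity the same constant bias is applied to all three axes.
    """
    w_meas = true_rate + gyro_bias + rng.normal(0.0, gyro_noise_std, 3)
    f_meas = true_force + accel_bias + rng.normal(0.0, accel_noise_std, 3)
    return w_meas, f_meas
```

At 100 Hz these conversions give a per-sample white-noise standard deviation of about 3.3 × 10⁻³ °/s on each gyroscope axis and 2 mg on each accelerometer axis.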
Table 4. The RMSE of the positioning results of the two single-anchor positioning methods in the simulation experiment. For each agent and each method, the total RMSE and its East, North, and Up components are given in meters.

| Agent & Method | East | North | Up | Total |
|---|---|---|---|---|
| Single Agent, Agent 1 | 5.5288 | 17.8081 | 39.6870 | 43.8492 |
| Cooperative, Agent 1 | 1.6037 | 1.7913 | 9.2595 | 9.5665 |
| Single Agent, Agent 2 | 18.0673 | 20.4627 | 47.8726 | 55.1084 |
| Cooperative, Agent 2 | 3.5570 | 8.0210 | 14.0858 | 16.5952 |
| Single Agent, Agent 3 | 9.1513 | 11.6996 | 92.2450 | 93.4332 |
| Cooperative, Agent 3 | 5.2348 | 5.3550 | 22.1043 | 23.3384 |
| Single Agent, Agent 4 | 14.7703 | 11.5248 | 37.7707 | 42.1617 |
| Cooperative, Agent 4 | 5.2812 | 5.9927 | 17.0927 | 18.8671 |
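Because the total RMSE is the root-sum-square of its East, North, and Up components, the tabulated values can be checked mechanically. Below is a small verification sketch (our own snippet, not the authors' code) applied to the two Agent 1 rows of Table 4:

```python
import math

# (East, North, Up, Total) RMSE values from Table 4, Agent 1 [m]
rows = {
    "Single Agent, Agent 1": (5.5288, 17.8081, 39.6870, 43.8492),
    "Cooperative, Agent 1":  (1.6037, 1.7913, 9.2595, 9.5665),
}
for name, (e, n, u, total) in rows.items():
    rss = math.sqrt(e**2 + n**2 + u**2)   # root-sum-square of the components
    print(f"{name}: RSS = {rss:.4f} m, tabulated total = {total:.4f} m")
    assert abs(rss - total) < 5e-4         # agreement to rounding precision
```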
Table 5. Parameters used in the real-world experiment.

| Parameter | Value |
|---|---|
| Gyroscope constant bias $b_g$ | 1°/h |
| Gyroscope angular random walk $\sigma_g$ | 0.1°/$\sqrt{\mathrm{h}}$ |
| Accelerometer constant bias $b_a$ | 5 mg |
| Accelerometer velocity random walk $\sigma_a$ | 0.5 mg/$\sqrt{\mathrm{Hz}}$ |
| Inertial sensor sampling period | 0.005 s (200 Hz) |
| Ranging standard deviation $\sigma_R$ | 0.1 m |
| Range sensor sampling period | 0.02 s (50 Hz) |
Table 6. The RMSE of the positioning results of the two single-anchor positioning methods in the real-world experiment. For each agent and each method, the total RMSE and its East, North, and Up components are given in meters.

| Agent & Method | East | North | Up | Total |
|---|---|---|---|---|
| Single Agent, Agent 1 | 2.4677 | 4.7936 | 3.1790 | 6.2589 |
| Cooperative, Agent 1 | 2.4672 | 4.9743 | 1.3240 | 5.7082 |
| Single Agent, Agent 2 | 2.4240 | 4.7466 | 3.6641 | 6.4677 |
| Cooperative, Agent 2 | 2.2601 | 4.9688 | 1.7510 | 5.7326 |
| Single Agent, Agent 3 | 2.7217 | 5.6337 | 3.9202 | 7.3833 |
| Cooperative, Agent 3 | 3.1588 | 6.8185 | 1.0772 | 7.5915 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
