Article

Cooperative Navigation for Heterogeneous Air-Ground Vehicles Based on Interoperation Strategy

1 Navigation Research Center, School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2006; https://doi.org/10.3390/rs15082006
Submission received: 28 February 2023 / Revised: 5 April 2023 / Accepted: 6 April 2023 / Published: 10 April 2023
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)

Abstract

This paper focuses on the cooperative navigation of heterogeneous air-ground vehicle formations in Global Navigation Satellite System (GNSS)-challenged environments and proposes a cooperative navigation method based on motion estimation and a regionally optimal path planning strategy. In the air-ground formation, unmanned ground vehicles (UGVs) are equipped with low-precision inertial measurement units and wireless ranging sensors; they exchange cooperative measurement information with unmanned aerial vehicles (UAVs) carrying high-precision navigation equipment and use the UAVs as aerial benchmarks for cooperative navigation. Firstly, the Interacting Multiple Model (IMM) algorithm is used to predict the position of the UGVs at the next moment. Then, a regional real-time path optimization algorithm is introduced to design the motion positions of the high-precision UAVs so as to improve the formation's configuration and reduce its geometric dilution of precision (GDOP). Simulation results show that the Dynamic Optimal Configuration Cooperative Navigation (DOC-CN) algorithm can reduce the GDOP of heterogeneous air-ground vehicle formations and effectively improve the overall navigation accuracy of the whole formation. The method is suitable for cooperative navigation of heterogeneous air-ground vehicle formations under GNSS-challenged conditions.


1. Introduction

With the advantages of low loss, zero casualties, and high mobility, unmanned vehicles are widely used in military and civilian fields, such as reconnaissance and strike, search and rescue, environmental monitoring, and resource exploration [1,2,3]. Different types of unmanned vehicles have different characteristics and advantages. Fixed-wing UAVs are fast and maneuverable, have a wide field of view, and are not restricted by terrain [4]. Rotary-wing UAVs have a simple structure, low cost, and good concealment, and are easy to transport and deploy on a large scale [5]. Unmanned ground vehicles (UGVs) are large in size and have a strong carrying capacity [6]. A formation composed of UAVs and UGVs can cooperate in complex, unknown, and dynamic environments to accomplish tasks through multidimensional sensing, information interaction, and collaborative interoperability. Various countries are currently conducting research on cross-domain collaboration projects. These include the SHERPA project of the European Union, which aims to build a system for searching and rescuing people in mountainous areas using aerial and ground-based unmanned platforms [7], and the ROBOSAMPLER project funded by Portugal, which aims to use rotary-wing UAVs and UGVs to build a hazardous-substance sampling platform suitable for complex wild environments [8]. In the Offensive Swarm-Enabled Tactics program, the United States uses a multi-platform unmanned swarm system composed of UGVs, fixed-wing UAVs, and rotary-wing UAVs to conduct reconnaissance on targets in a simulated urban environment [9]. Heterogeneous air-ground vehicle formations thus have good development potential in various fields, and formation navigation and positioning technology is an important part of the formation control system. This paper proposes a cooperative navigation algorithm for air-ground vehicles that ensures the overall positioning performance of the formation while taking into account the respective advantages and shortcomings of different unmanned vehicles in environmental perception and movement characteristics. We construct a heterogeneous air-ground cooperative system in which aerial UAVs provide wide-range, high-altitude observation and UGVs perform close reconnaissance; such a system has the advantages of distributed functions, a high survival rate, and high efficiency [10,11,12]. The cooperative navigation algorithm reduces the number of sensors carried by the vehicles and lowers the performance requirements that the navigation system places on the on-board computing platform. Meanwhile, the distributed cooperative navigation architecture overcomes the poor scalability and weak anti-destruction capability of the traditional centralized architecture and reduces the communication burden among vehicles.
Accurate positioning information is the critical factor affecting heterogeneous air-ground vehicle formations' ability to execute various tasks. Satellite navigation is the primary means by which air-ground vehicle formations achieve their respective positioning. Nevertheless, GNSS-challenged situations may occur when formations execute missions in areas such as built-up urban terrain and jungles, where signals are blocked by occlusion [13,14]. The GNSS signal is also easily interfered with in a complex battlefield environment due to its low signal power [15]. The positioning accuracy of ordinary GNSS equipment cannot meet the intended requirements of the navigation system, and situations requiring high accuracy call for high-precision satellite navigation equipment such as Real-Time Kinematic (RTK) receivers. However, equipping every vehicle with such a device is too expensive and difficult to implement. Satellite-independent navigation methods currently include scene matching, terrain matching, astronomical navigation, visual navigation, and so on [16,17,18,19], but these navigation sensors are no longer applicable under the high dynamic motion characteristics, limited computational performance, and complex environmental constraints of unmanned vehicles. Therefore, improving positioning accuracy through cooperative navigation has become a research hotspot for air-ground vehicle formation navigation [20,21,22].
In order to improve the overall localization performance of vehicle formations, many scholars have researched multi-source fusion algorithms. Vetrella et al. proposed a cooperative navigation method that fuses inertial, magnetometer, available satellite pseudorange, cooperative UAV position, and monocular camera information, effectively improving the navigation performance of UAV swarms in GPS-constrained situations [23]. Indelman et al. proposed a method for distributed vision-aided cooperative localization and navigation of multiple inter-communicating autonomous vehicles based on three-view geometric constraints, allowing localization when different vehicles observe the same scene [24]. Gao et al. proposed an on-board cooperative positioning scheme based on integrated ultra-wideband (UWB) and GNSS that can achieve better than decimeter-level positioning accuracy [25]. Xiong et al. integrated satellites, ground stations, inertial sensors, inter-node ranging and speed measurement, and random signal sources to achieve cooperative positioning between vehicles [26].
Under the computational performance constraints of the navigation platform, positioning accuracy can be improved by preferentially selecting the available cooperative navigation information. Therefore, numerous scholars have studied how the position distribution of each vehicle in a cooperative navigation system influences positioning accuracy. Chen et al. proposed a cooperative dilution of precision (C-DOP) calculation method combining the ranging error, clock error, and position error of cooperative UAVs to analyze the positioning error of a UAV swarm under different formations [27]. Heng et al. proposed a generalized theory in which the lower bound on the expectation of the average geometric DOP (LB-E-AGDOP) can be used to quantify positioning accuracy and demonstrated a strong link between LB-E-AGDOP and the best achievable accuracy [28]. Huang et al. used the collaborative dilution of precision (CDOP) model to specify the effect of relative distance measurement accuracy, the number of users, and their distribution on localization [29]. Causa et al. proposed the concept of a generalized accuracy factor and investigated the accuracy calculation of cooperative configurations based on visual measurement; the experimental results showed that a UAV swarm can achieve meter-level positioning accuracy with the aid of visual measurement under an appropriate cooperative configuration [30]. Sivaneri et al. used a UGV to assist a UAV in positioning, thus improving the positioning geometry of the UAV when few satellites are visible [31]. Although there are numerous studies on cooperative navigation systems, they mainly focus on the acquisition and fusion of navigation information. There is no in-depth research on improving the cooperative navigation accuracy of air-ground vehicle formations through configuration optimization of the formation.
A new approach for cooperative navigation of heterogeneous air-ground vehicle formations is proposed in this paper. Firstly, we use the IMM algorithm to predict the motion state of the UGVs and construct a cost function based on the GDOP value of the whole air-ground vehicle formation. Then, the algorithm traverses the motion range of the UAVs and selects the minimum-cost position as the position the UAVs should reach at the next moment. Finally, the UGV localization is completed by fusing the cooperative range values through a Kalman filter. The simulation results show that the proposed method achieves real-time optimization of the configuration, reduces the cooperative navigation error, and provides guidelines for the deployment and mission execution of heterogeneous air-ground vehicle formations.

2. Measurement Model

The following scenario is considered in this paper. As shown in Figure 1, heterogeneous air-ground vehicle formations execute missions in complex scenarios (e.g., urban areas, forests, canyons, etc.). In such scenarios, the GNSS signal received by the UGVs is easily interrupted and deceived due to obstacle blockage and active jamming, so a regional navigation and positioning system is constructed by the UAVs to provide a positioning service for the UGVs. The UGVs receive the absolute position information of the UAVs and the inter-vehicle range information broadcast by the aerial benchmarks, and then complete their own positioning calculation through the spatial geometric constraint relationships. In terms of navigation sensor configuration, UAVs flying at higher altitude are equipped with high-precision navigation equipment, such as a high-precision IMU, differential GPS, and altimeters. UGVs that execute missions in urban alleyways carry a lower-accuracy IMU and other navigation equipment. Navigation data and sensor data are shared among all vehicles via a wireless network.
For the cooperative navigation system shown in Figure 1, we introduce two navigation coordinate systems: the Earth-Centered, Earth-Fixed (ECEF) frame and the geographic frame, denoted by $e$ and $g$, respectively. All high-altitude UAVs are denoted by $H$; ground-based UGVs are denoted by $G$. The position parameters of vehicles $i$ and $\alpha_n$ are denoted as $p_i = [x_i\ y_i\ z_i]$ and $p_{\alpha_n} = [x_{\alpha_n}\ y_{\alpha_n}\ z_{\alpha_n}]$, where $i \in G$ and $\alpha_n \in H$. The velocity parameters are denoted as $v_i = [v_{i,x}\ v_{i,y}\ v_{i,z}]$ and $v_{\alpha_n} = [v_{\alpha_n,x}\ v_{\alpha_n,y}\ v_{\alpha_n,z}]$.
Many wireless ranging methods exist, such as Time of Arrival (TOA), Time Difference of Arrival (TDOA), and Received Signal Strength Indication (RSSI). Regarding the distance measurement error, the RSSI ranging error is generally modeled as a log-normal distribution [32]. Most TOA-based methods model the error as a zero-mean Gaussian random variable in the line-of-sight case [33]. In the non-line-of-sight (NLOS) case, the ranging error is generally modeled as the superposition of the distance difference caused by clock error, the measurement noise, and the NLOS error [34].
Assuming a zero-mean Gaussian distribution for the range error and perfect clock synchronization for all the high-altitude UAVs, the range values in this paper are of the following form.
$$r_{\alpha_n i} = d_{\alpha_n i} + c\,t_{\alpha_n} + n_i, \qquad d_{\alpha_n i} = \sqrt{(x_{\alpha_n}^e - x_i^e)^2 + (y_{\alpha_n}^e - y_i^e)^2 + (z_{\alpha_n}^e - z_i^e)^2} \tag{1}$$
where $d_{\alpha_n i}$ denotes the actual distance between vehicle $i$ and vehicle $\alpha_n$; $c$ is the speed of light; $t_{\alpha_n}$ is the clock error; $n_i \sim \mathcal{N}(0, \delta^2)$ is additive white Gaussian noise with zero mean and variance $\delta^2$; the superscript $e$ indicates the ECEF coordinate system.
The problem of localization in NLOS environments is described in the literature [35,36] and is not analyzed in this paper.
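For illustration, the following is a minimal Python sketch of the range measurement of Equation (1) under these assumptions (line-of-sight, zero-mean Gaussian noise, a single clock offset). The positions, clock offset, and noise level are illustrative placeholders rather than values from the authors' setup (the clock offset and ranging noise are borrowed from the simulation parameters later listed in Table 1).

```python
import numpy as np

def simulate_range(p_uav, p_ugv, clock_err_s=40e-9, sigma=10.0, rng=None):
    """Simulate one TOA range measurement r = d + c*t + n (Equation (1)).

    p_uav, p_ugv : 3-vectors in ECEF coordinates [m]
    clock_err_s  : UAV clock error [s] (illustrative value, cf. Table 1)
    sigma        : std. dev. of the zero-mean Gaussian ranging noise [m]
    """
    rng = np.random.default_rng() if rng is None else rng
    c = 299_792_458.0                                            # speed of light [m/s]
    d = np.linalg.norm(np.asarray(p_uav) - np.asarray(p_ugv))    # true distance d_{alpha_n i}
    return d + c * clock_err_s + rng.normal(0.0, sigma)

# Example: one UAV at 1200 m altitude ranging to a ground vehicle
r = simulate_range([500.0, 300.0, 1200.0], [450.0, 280.0, 1.5])
print(r)
```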

3. Cooperative Navigation System

For the cooperative navigation scenario depicted in Figure 1, high-altitude fixed-wing UAVs and rotary-wing UAVs can provide cooperative navigation assistance to the UGVs, and the accuracy of the final cooperative navigation depends not only on the accuracy of the navigation sensors but also on the flight configuration of the UAVs. Since different types of vehicles have different speeds and states during movement, the UGVs need to adjust their driving state according to the actual environment, for example, to bypass obstructions. The heterogeneous air-ground vehicle formation therefore cannot execute the mission in a fixed configuration; the UAVs need to adjust the flight configuration in real time according to the positions of the UGVs.
The merit of a configuration can be measured by the dilution of precision (DOP), which relates the configuration to the positioning accuracy. The DOP based on inter-vehicle wireless ranging is calculated as follows.
Set the approximate position of vehicle $i$ as $\tilde{p}_i^e = [\tilde{x}_i^e\ \tilde{y}_i^e\ \tilde{z}_i^e]$ and the approximate clock difference as $\tilde{t}_{\alpha_n}$. Neglecting the measurement noise and NLOS error in Equation (1), performing a Taylor expansion at $\tilde{p}_i^e$, and retaining the first-order term, $\Delta r_{\alpha_n i}$ can be written as follows.
$$\Delta r_{\alpha_n i} = h_x^{\alpha_n}\,\Delta x_i^e + h_y^{\alpha_n}\,\Delta y_i^e + h_z^{\alpha_n}\,\Delta z_i^e - c\,\Delta t_{\alpha_n} \tag{2}$$
where $(h_x^{\alpha_n}\ h_y^{\alpha_n}\ h_z^{\alpha_n})$ are the direction cosines from vehicle $\alpha_n$ to vehicle $i$; $(\Delta x_i^e\ \Delta y_i^e\ \Delta z_i^e)$ is the difference between the approximate position of vehicle $i$ and its actual position; $\Delta t_{\alpha_n}$ is the deviation between the accurate and approximate clock difference.
Equation (2) can be extended to the following form.
$$\begin{bmatrix} \Delta r_{\alpha_1 i} \\ \Delta r_{\alpha_2 i} \\ \vdots \\ \Delta r_{\alpha_n i} \end{bmatrix} = \begin{bmatrix} h_x^{\alpha_1} & h_y^{\alpha_1} & h_z^{\alpha_1} & -1 \\ h_x^{\alpha_2} & h_y^{\alpha_2} & h_z^{\alpha_2} & -1 \\ \vdots & \vdots & \vdots & \vdots \\ h_x^{\alpha_n} & h_y^{\alpha_n} & h_z^{\alpha_n} & -1 \end{bmatrix} \begin{bmatrix} \Delta x_i^e \\ \Delta y_i^e \\ \Delta z_i^e \\ c\,\Delta t_{\alpha_n} \end{bmatrix} = H_i\,\Delta\mathbf{x}_i^e \tag{3}$$
Based on Equation (3), the position and clock deviation vectors of the vehicle can be obtained as:
$$\Delta\mathbf{x}_i^e = (H_i^T H_i)^{-1} H_i^T\,\Delta\mathbf{r} \tag{4}$$
The error covariance of the deviation vector can be defined as:
$$\mathrm{cov}(\delta\Delta\mathbf{x}_i^e) = (H_i^T H_i)^{-1} H_i^T\,\mathrm{cov}(\delta\Delta\mathbf{r})\,\big[(H_i^T H_i)^{-1} H_i^T\big]^T = (H_i^T H_i)^{-1}\delta^2 \tag{5}$$
GDOP is then defined as the square root of the trace of $(H_i^T H_i)^{-1}$. The GDOP of all UAVs with respect to vehicle $i$ is defined as:
$$\mathrm{GDOP} = \sqrt{\mathrm{tr}\big[(H_i^T H_i)^{-1}\big]} \tag{6}$$
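For illustration, the following minimal Python sketch evaluates Equations (3)-(6) for a given set of UAV positions and one approximate UGV position. The clock column is taken as $-1$ in line with Equation (2), and the example geometry is purely illustrative; this is a sketch of the GDOP computation, not the authors' implementation.

```python
import numpy as np

def gdop(uav_positions, ugv_position):
    """GDOP of a set of aerial benchmarks w.r.t. one UGV (Equations (3) and (6)).

    uav_positions : (n, 3) array of UAV positions in ECEF [m]
    ugv_position  : (3,) approximate UGV position in ECEF [m]
    Assumes at least four benchmarks in non-degenerate geometry.
    """
    uav_positions = np.atleast_2d(np.asarray(uav_positions, dtype=float))
    diff = uav_positions - np.asarray(ugv_position, dtype=float)
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    h = diff / dist                               # unit vectors (sign does not affect GDOP)
    H = np.hstack([h, -np.ones((len(h), 1))])     # clock column, cf. Equation (3)
    Q = np.linalg.inv(H.T @ H)                    # (H^T H)^{-1}
    return float(np.sqrt(np.trace(Q)))

# Example with five UAVs around a ground vehicle at the origin (illustrative geometry)
uavs = np.array([[800, 0, 1200], [-600, 500, 1300], [0, -900, 1100],
                 [400, 700, 1500], [-300, -400, 1000]], dtype=float)
print(gdop(uavs, np.zeros(3)))
```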
For the whole cooperative navigation system, each UGV calculates its position with its IMU and fuses the cooperative information from the UAVs via a Kalman filter. At the $k-1$ moment, the UGVs predict their positions at the $k$ moment with the IMM algorithm; the positions that the UAVs should reach at the $k$ moment are then chosen so that the sum of the GDOP values corresponding to all UGVs is smallest. In this way, the configuration of the whole cooperative navigation system is adjusted in real time to provide the UGVs with optimal cooperative information. The cooperative navigation algorithm proposed in this paper is therefore divided into three main parts: position prediction of the UGVs, cost function calculation and optimal position selection for the high-altitude UAVs, and inertial/co-ranging fusion for the UGVs.

3.1. Position Prediction of UGVs

In order to carry out the route planning of the high-altitude UAVs, the positions of the UGVs at the $k$ moment must first be predicted at the $k-1$ moment. Unmanned ground vehicles are highly mobile, and rapid switching of their movement modes leads to drastic changes in parameters such as heading, velocity, and acceleration. A traditional single-model filter converges slowly and estimates and predicts the state of a highly dynamic target with poor stability and accuracy, so this paper introduces a multiple-model filter to predict the position of the UGVs at the $k$ moment.
The core of the IMM algorithm lies in describing the target's maneuver with a set of models and filters working in parallel, each corresponding to a different maneuver state; switching between models follows a known Markov process, and the final estimate is a weighted combination of all model state estimates [37,38]. Commonly used models include the uniform (constant) velocity model, the uniform acceleration model, the current statistical model, the Singer model, and so on [39,40]. In order to effectively characterize the maneuvering characteristics of an unmanned vehicle in ground motion, improve the robustness of the IMM filter, and reduce the computational effort of the system, the uniform velocity (CV) model and the uniform acceleration (CA) model are used to describe the motion state of each ground vehicle in this paper. The specific steps of the IMM algorithm designed in this paper are as follows.
Step 1: Input interaction module. $i, j \in \{1\ (\text{CV model}),\ 2\ (\text{CA model})\}$, $k \in \{1, \dots, K\}$.
A. Mixing Probability Calculation.
$$\mu_{i|j}(k-1|k-1) = \frac{p_{ij}\,\mu_i(k-1)}{\bar{c}_j}, \qquad \bar{c}_j = \sum_{i=1}^{2} p_{ij}\,\mu_i(k-1) \tag{7}$$
where $\mu_{i|j}(k-1|k-1)$ represents the probability that the state estimate of model $i$ at the $k-1$ moment transitions to model $j$ at the $k$ moment, $\mu_i(k-1)$ denotes the model probability of model $i$ at the $k-1$ moment, $\bar{c}_j$ is the normalization factor, and $p_{ij}$ is the transition probability from model $i$ to model $j$.
B. Mixing state estimation and Covariance matrix Calculation.
$$\begin{aligned} \hat{x}_{0j}(k-1|k-1) &= \sum_{i=1}^{2} \hat{x}_i(k-1|k-1)\,\mu_{i|j}(k-1|k-1) \\ \hat{P}_{0j}(k-1|k-1) &= \sum_{i=1}^{2} \mu_{i|j}(k-1|k-1)\Big\{ P_i(k-1|k-1) + \big[\hat{x}_i(k-1|k-1) - \hat{x}_{0j}(k-1|k-1)\big]\big[\hat{x}_i(k-1|k-1) - \hat{x}_{0j}(k-1|k-1)\big]^T \Big\} \end{aligned} \tag{8}$$
where $\hat{x}_i(k-1|k-1)$ and $P_i(k-1|k-1)$ are the state estimate and the covariance matrix of model $i$ at the $k-1$ moment, respectively.
Step 2 Model filter estimation module.
A. One-step prediction for each sub-model
$$\hat{x}_j(k|k-1) = F\,\hat{x}_{0j}(k-1|k-1), \qquad \hat{P}_j(k|k-1) = F\,\hat{P}_{0j}(k-1|k-1)\,F^T + G\,Q(k)\,G^T \tag{9}$$
where $F$ and $G$ are the state transition matrix and the system noise matrix of the model, respectively; the state equations of the uniform velocity model and the uniform acceleration model are given in the literature [41]. $Q(k)$ is the covariance matrix of the system noise.
B. Position prediction
$$\hat{x}(k|k-1) = \sum_{j=1}^{2} \hat{x}_j(k|k-1)\,\mu_j(k-1) \tag{10}$$
where $\hat{x}(k|k-1)$ is the estimated position of the UGV at the $k$ moment.
Step 3: Model probability update module ($j = 1, 2$).
A. Filtered residuals
$$\nu_j(k) = Z_j(k) - H_j\,\hat{x}_j(k|k-1), \qquad S_j(k) = H_j\,\hat{P}_j(k|k-1)\,H_j^T + R_j \tag{11}$$
where $Z_j(k)$ is the position output obtained by fusing the IMU/co-ranging values of the vehicle at the $k$ moment, as described in Section 3.3, and $H_j$ is the measurement matrix.
B. Kalman filter gain calculation
$$K_j = \hat{P}_j(k|k-1)\,H_j^T\,S_j(k)^{-1} \tag{12}$$
C. Sub-model measurement update
$$\hat{x}_j(k|k) = \hat{x}_j(k|k-1) + K_j\,\nu_j(k), \qquad \hat{P}_j(k|k) = (I - K_j H_j)\,\hat{P}_j(k|k-1)\,(I - K_j H_j)^T + K_j R_j K_j^T \tag{13}$$
D. Model probability update ($j = 1, 2$)
First, the likelihood function is calculated with the following equation.
$$\Lambda_j(k) = \mathcal{N}\big(\nu_j(k),\ 0,\ S_j(k)\big) \tag{14}$$
where $\Lambda_j(k)$ is the likelihood function and $\mathcal{N}\big(\nu_j(k),\ 0,\ S_j(k)\big)$ denotes the Gaussian density of $\nu_j(k)$ with zero mean and covariance $S_j(k)$. The updated model probabilities are:
$$\mu_j(k) = \frac{\Lambda_j(k)\,\bar{c}_j}{\sum_{i=1}^{2}\Lambda_i(k)\,\bar{c}_i} \tag{15}$$
Step 4 Estimation fusion module.
$$\hat{x}(k|k) = \sum_{j=1}^{2} \hat{x}_j(k|k)\,\mu_j(k), \qquad P(k|k) = \sum_{j=1}^{2} \mu_j(k)\Big\{ \hat{P}_j(k|k) + \big[\hat{x}_j(k|k) - \hat{x}(k|k)\big]\big[\hat{x}_j(k|k) - \hat{x}(k|k)\big]^T \Big\} \tag{16}$$
The first two steps above are completed at the $k-1$ moment, thus predicting the positions of the UGVs at the $k$ moment. The last two steps are performed after the IMU/co-ranging fusion of the vehicle is completed at the $k$ moment, in order to update the model data of the IMM algorithm.
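For illustration, the following is a minimal Python sketch of the mixing and prediction side of the filter (Steps 1 and 2, Equations (7)-(10)), using a one-dimensional position/velocity/acceleration state and the two sub-models named above. The sampling period, transition probabilities, initial values, and noise level are illustrative assumptions, and the system noise matrix $G$ is absorbed into $Q$ for brevity; this is not the authors' implementation.

```python
import numpy as np

dt = 1.0
# Sub-model state-transition matrices on a [pos, vel, acc] state (1-D for brevity)
F_CV = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 0]], dtype=float)             # CV model
F_CA = np.array([[1, dt, 0.5 * dt**2], [0, 1, dt], [0, 0, 1]], dtype=float)  # CA model
MODELS = [F_CV, F_CA]

P_TRANS = np.array([[0.95, 0.05],      # Markov model-transition probabilities p_ij
                    [0.05, 0.95]])

def imm_predict(x_hat, P_hat, mu, Q):
    """IMM mixing + one-step prediction (Equations (7)-(10)).

    x_hat : list of per-model state estimates at k-1
    P_hat : list of per-model covariances at k-1
    mu    : (2,) array of model probabilities at k-1
    Q     : process-noise covariance (G absorbed into Q for brevity)
    """
    # Step 1A: mixing probabilities mu_{i|j} and normalizers c_bar_j, Equation (7)
    c_bar = P_TRANS.T @ mu
    mu_mix = (P_TRANS * mu[:, None]) / c_bar[None, :]
    # Step 1B: mixed initial state/covariance for each model j, Equation (8)
    x0, P0 = [], []
    for j in range(2):
        xj = sum(mu_mix[i, j] * x_hat[i] for i in range(2))
        Pj = sum(mu_mix[i, j] * (P_hat[i] + np.outer(x_hat[i] - xj, x_hat[i] - xj))
                 for i in range(2))
        x0.append(xj); P0.append(Pj)
    # Step 2A: per-model one-step prediction, Equation (9)
    x_pred = [MODELS[j] @ x0[j] for j in range(2)]
    P_pred = [MODELS[j] @ P0[j] @ MODELS[j].T + Q for j in range(2)]
    # Step 2B: combined position prediction weighted by model probabilities, Equation (10)
    x_comb = sum(mu[j] * x_pred[j] for j in range(2))
    return x_pred, P_pred, x_comb

# Example call with arbitrary initial values
x_hat = [np.array([0.0, 5.0, 0.0]), np.array([0.0, 5.0, 0.5])]
P_hat = [np.eye(3), np.eye(3)]
_, _, x_next = imm_predict(x_hat, P_hat, np.array([0.5, 0.5]), 0.01 * np.eye(3))
print(x_next)
```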

3.2. Cost Functions and Path Planning Strategies

The positioning accuracy provided by the UAVs depends on the ranging error, the position error of the aerial benchmarks, and the configuration distribution of the UAVs. The ranging error is mainly determined by the clock difference between the two vehicles, and the aerial benchmark position error mainly depends on the GNSS positioning accuracy; both are determined by the sensor hardware performance. The UAVs therefore need to dynamically adjust their flight positions and form different formation configurations to provide cooperative navigation services for the UGVs. After the positions of the UGVs at the $k$ moment are predicted, the predicted positions are used to plan the positions of the UAVs so as to achieve the optimal configuration. Path planning algorithms are mainly classified into three categories: traditional path planning algorithms, sampling-based path planning algorithms, and intelligent bionic algorithms, among which the A* algorithm is currently the mainstream algorithm in the field of path planning [42,43]. The advantages of the A* algorithm are its rapid response to the environment and high computational efficiency. It is a heuristic search algorithm that allows a UAV to quickly plan a route and generate maneuver control commands given known starting and ending points [44].
This paper combines the need for real-time configuration optimization with the idea of the sparse A* algorithm and proposes a real-time, memoryless path optimization method. Taking the current position of each UAV as the center, the neighborhood space is divided to establish a grid centered on the UAV, called the area to be extended; the combination of the areas to be extended of all UAVs is called the search space. All node combinations in the search space are traversed to find the grid positions corresponding to the minimum cost, which are the planned positions of the UAVs at the next moment. First, we establish the cost function for the route.

3.2.1. Cost Function Establishment

In the cooperative mission execution of heterogeneous air-ground vehicle formations, borrowing the A*-based trajectory planning of the literature [45], the cost can generally be divided into two parts, namely the route flight cost and the GDOP cost. The route flight cost mainly includes the distance flown, since it determines the endurance consumption of the UAV; if the mission has a target point, the route flight cost can also include the distance to the target point. The GDOP cost is the sum of the GDOP values corresponding to each UGV during the execution of the mission. There is a coupling relationship between the GDOP cost and the route flight cost, so the two need to be considered together.
The cost function is defined as:
$$J(t) = g(t) + r(t) \tag{17}$$
where $g(t)$ denotes the GDOP cost; assuming that the number of ground vehicles is $N$, $g(t)$ can be designed as:
$$g(t) = \omega_1 \sum_{i=1}^{N} \mathrm{GDOP}_i \tag{18}$$
where $r(t)$ denotes the route flight cost; assuming that the number of aerial vehicles is $M$, $r(t)$ is designed as:
$$r(t) = \omega_2 \sum_{\alpha_n=1}^{M} D_{\alpha_n} \tag{19}$$
In Equations (18) and (19), $\omega_1$ and $\omega_2$ are the weights of the GDOP cost and the route flight cost, respectively; their ratio can be adjusted to choose whether priority is given to guaranteeing the GDOP value or to reducing the flight distance. In this system, fixed-wing UAVs are selected as the aerial benchmarks; their movement speed is larger than that of rotary-wing UAVs, so they can reach the preset positions quickly, and therefore $\omega_1$ is set to 1/3 and $\omega_2$ is set to 2/3. $D_{\alpha_n}$ denotes the distance flown by UAV $\alpha_n$ from the $k-1$ moment to the $k$ moment.

3.2.2. Path Planning Strategy

When establishing the area to be extended for the UAVs, the constraints of the UAVs need to be considered simultaneously, including the maximum movement step $L_{\max}$, minimum movement step $L_{\min}$, maximum pitch angle $\theta_{\max}$, minimum flight height $H_{\min}$, and minimum collision avoidance distance $R_{\min}$ between two UAVs. According to the current positions of the UAVs and these constraints, the path planning process is as follows.
A. Search space establishment
As shown in Figure 2, the region to be extended for a high-altitude UAV is established with its current position as the center, subject to the constraints above.
The area to be extended is split as shown in Figure 2. Its horizontal profile is a circle, as shown in Figure 3, and the circle is split into $l$ parts, each of which has a vertical profile in the shape of a sector ring, as shown in Figure 4.
Each sector ring is divided into $n$ parts along the radial direction and $m$ parts along the arc direction, so that the area to be extended is divided into $m \times n \times l$ expansion nodes. The extended nodes of the $M$ airborne UAVs can be combined in $(m \times n \times l)^M$ ways; the combinations that do not satisfy the height constraint and the minimum collision avoidance distance constraint are removed, and the remaining combinations constitute the search space at the $k-1$ moment.
B. Calculation of the cost function
Based on the UGV locations predicted by Equation (10), all node combinations in the search space are traversed, and the corresponding cost value is calculated according to Equation (17) to find the least costly node combination as the locations that the UAVs should reach at the $k$ moment.
C. Path Planning
The positions obtained in Step B are only isolated coordinate points and need to be combined with the motion characteristics of the UAVs to plan a flyable route; the specific route planning algorithm can be found in the literature [46].
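To make Steps A and B concrete, the following is a minimal Python sketch, assuming illustrative values for the grid resolution ($l$, $n$, $m$), the motion constraints, and the weights; it generates candidate nodes around each UAV, discards combinations that violate the height and collision-avoidance constraints, and selects the combination minimizing the cost of Equations (17)-(19). The gdop() helper repeats the sketch given after Equation (6). Note that the exhaustive traversal grows as $(m \times n \times l)^M$, so only coarse grids keep it tractable; this is a sketch, not the authors' implementation.

```python
import itertools
import numpy as np

def gdop(uav_positions, ugv_position):
    """GDOP of the aerial benchmarks w.r.t. one UGV (see the sketch after Equation (6))."""
    diff = uav_positions - ugv_position
    h = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    H = np.hstack([h, -np.ones((len(h), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

def candidate_nodes(p_uav, l=8, n=2, m=2, L_min=20.0, L_max=100.0,
                    theta_max=np.deg2rad(15.0), H_min=800.0):
    """Discretize one UAV's area to be extended into at most l*n*m grid nodes."""
    nodes = []
    for az in np.linspace(0.0, 2 * np.pi, l, endpoint=False):    # l angular sectors
        for r in np.linspace(L_min, L_max, n):                    # n radial steps
            for el in np.linspace(-theta_max, theta_max, m):      # m pitch steps
                cand = p_uav + r * np.array([np.cos(el) * np.cos(az),
                                             np.cos(el) * np.sin(az),
                                             np.sin(el)])
                if cand[2] >= H_min:                              # minimum flight height
                    nodes.append(cand)
    return nodes

def plan_uav_positions(uav_positions, ugv_predictions, w1=1/3, w2=2/3, R_min=50.0):
    """Exhaustively traverse the search space and return the node combination
    minimizing J = w1 * sum(GDOP_i) + w2 * sum(D_alpha_n), Equations (17)-(19)."""
    uav_positions = np.asarray(uav_positions, dtype=float)
    per_uav_nodes = [candidate_nodes(p) for p in uav_positions]
    best_cost, best_combo = np.inf, None
    for combo in itertools.product(*per_uav_nodes):               # (m*n*l)^M combinations
        combo = np.array(combo)
        if any(np.linalg.norm(a - b) < R_min                      # collision avoidance
               for a, b in itertools.combinations(combo, 2)):
            continue
        g_cost = w1 * sum(gdop(combo, np.asarray(p)) for p in ugv_predictions)
        r_cost = w2 * np.sum(np.linalg.norm(combo - uav_positions, axis=1))
        if g_cost + r_cost < best_cost:
            best_cost, best_combo = g_cost + r_cost, combo
    return best_combo, best_cost
```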

3.3. Inertial/Co-Ranging Value Fusion Method for UGVs

When the UAVs have established the aerial benchmarks, the UGVs complete the cooperative information interaction and distance calculation through the data link with the UAVs, and a Kalman filter is used to fuse the navigation information. Under the Kalman filtering framework, the system state equation is constructed using the position and attitude information obtained from the inertial sensors as state quantities; the best state estimate of the previous moment is combined with the equation of motion to complete the one-step prediction; finally, the relative distances are used as observations to construct the measurement update equation. These operations are repeated in a loop to complete the cooperative navigation and positioning solution [47]. Compared with cooperative navigation techniques based on factor graph theory, the Kalman filter has the advantages of high computational efficiency, good real-time performance, and low communication load, and it is straightforward to implement in engineering practice [48].

3.3.1. State Equation

During route planning, each UAV also continuously sends ranging signals and position information; after receiving this information, a UGV can fuse it with its own inertial information to effectively correct the inertial solution.
Using the geographic coordinate system as the navigation coordinate system, the state vector of the navigation system of UGV $i$ is defined as follows [49].
$$X = [\varphi_x^g\ \ \varphi_y^g\ \ \varphi_z^g\ \ \delta v_x^g\ \ \delta v_y^g\ \ \delta v_z^g\ \ \delta L\ \ \delta\lambda\ \ \delta h\ \ \varepsilon_{bx}\ \ \varepsilon_{by}\ \ \varepsilon_{bz}\ \ \varepsilon_{rx}\ \ \varepsilon_{ry}\ \ \varepsilon_{rz}\ \ \nabla_x\ \ \nabla_y\ \ \nabla_z]^T \tag{20}$$
where the superscript $g$ represents the geographic coordinate system; $\varphi_x^g,\ \varphi_y^g,\ \varphi_z^g$ and $\delta v_x^g,\ \delta v_y^g,\ \delta v_z^g$ are the platform angle errors and velocity errors in the east, north, and up directions, respectively; $\delta L,\ \delta\lambda,\ \delta h$ are the latitude, longitude, and altitude errors, respectively; $\varepsilon_{bx},\ \varepsilon_{by},\ \varepsilon_{bz}$ and $\varepsilon_{rx},\ \varepsilon_{ry},\ \varepsilon_{rz}$ are the gyro constant drifts and first-order Markov drifts, respectively; and $\nabla_x,\ \nabla_y,\ \nabla_z$ are the accelerometer biases.
The state equation can be constructed according to the defined state vector
$$\dot{X} = AX + BW \tag{21}$$
where $A$ is the linearized INS error-state matrix, $B$ is the noise transfer matrix, and $W$ is the system process noise, modeled as zero-mean multivariate Gaussian noise with covariance $Q_{INS}$, whose value is determined by the accuracy of the gyroscopes and accelerometers.

3.3.2. Measurement Equation

The measurement equation of the filter can be defined as
$$Z = H_{INS}\,X + V \tag{22}$$
where $Z$ is the observation vector related to the wireless ranging measurements, $H_{INS}$ is the observation matrix, and $V$ is the wireless ranging measurement noise with covariance $R$. The observation vector and observation matrix are defined as:
$$Z = \begin{bmatrix} r_{\alpha_1, INS} - r_{\alpha_1 i} \\ r_{\alpha_2, INS} - r_{\alpha_2 i} \\ \vdots \\ r_{\alpha_n, INS} - r_{\alpha_n i} \end{bmatrix} \tag{23}$$
$$H_{INS} = \big[\, 0_{n \times 6}\ \ \ H_i H_e^g\ \ \ 0_{n \times 9} \,\big] \tag{24}$$
where $H_e^g$ is the transformation matrix that converts $(\delta L,\ \delta\lambda,\ \delta h)$ into $(\delta x^e,\ \delta y^e,\ \delta z^e)$ and satisfies the following formula:
$$H_e^g = \begin{bmatrix} -(R_N + h)\sin L \cos\lambda & -(R_N + h)\cos L \sin\lambda & \cos L \cos\lambda \\ -(R_N + h)\sin L \sin\lambda & (R_N + h)\cos L \cos\lambda & \cos L \sin\lambda \\ \big[R_N (1-f)^2 + h\big]\cos L & 0 & \sin L \end{bmatrix} \tag{25}$$
where $R_N$ is the radius of curvature in the prime vertical and $f$ denotes the Earth's oblateness.
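As an illustration of the inertial/co-ranging measurement update of Equations (22)-(25), the following is a simplified Python sketch. It assumes the 18-dimensional error-state ordering of Equation (20), a constant prime-vertical radius, and the direction-cosine sign convention of Equation (2); the clock-error term is ignored, and all names and values are illustrative rather than the authors' implementation.

```python
import numpy as np

def h_e_g(L, lam, h, R_N=6_378_137.0, f=1 / 298.257):
    """Transformation from (dL, dlambda, dh) to ECEF position errors, Equation (25).
    R_N is taken as a constant here for brevity."""
    sL, cL, sl, cl = np.sin(L), np.cos(L), np.sin(lam), np.cos(lam)
    return np.array([[-(R_N + h) * sL * cl, -(R_N + h) * cL * sl, cL * cl],
                     [-(R_N + h) * sL * sl,  (R_N + h) * cL * cl, cL * sl],
                     [(R_N * (1 - f) ** 2 + h) * cL, 0.0, sL]])

def range_update(x, P, p_ins_ecef, uav_positions, r_meas, L, lam, h, sigma_r=10.0):
    """One Kalman measurement update fusing INS-predicted and measured ranges.

    x, P          : 18-dim error state (ordering of Equation (20)) and its covariance
    p_ins_ecef    : INS-indicated UGV position in ECEF [m]
    uav_positions : (n, 3) aerial benchmark positions in ECEF [m]
    r_meas        : (n,) measured ranges [m]
    L, lam, h     : INS-indicated latitude, longitude [rad] and height [m]
    """
    uav_positions = np.asarray(uav_positions, dtype=float)
    diff = p_ins_ecef - uav_positions                 # direction convention of Equation (2)
    r_ins = np.linalg.norm(diff, axis=1)              # INS-predicted ranges r_{alpha_n, INS}
    H_i = diff / r_ins[:, None]                       # direction cosines, n x 3
    n = len(r_ins)
    H = np.zeros((n, 18))
    H[:, 6:9] = H_i @ h_e_g(L, lam, h)                # position-error block, Equation (24)
    Z = r_ins - r_meas                                # observation vector, Equation (23)
    R = sigma_r ** 2 * np.eye(n)                      # ranging-noise covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x_new = x + K @ (Z - H @ x)                       # error-state update
    P_new = (np.eye(18) - K @ H) @ P
    return x_new, P_new
```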

3.4. Description of the Cooperative Navigation Method

The structure of the heterogeneous air-ground vehicle formations cooperative navigation method proposed in this paper is given in Figure 5. Combining the methods and strategies proposed in Section 3.1, Section 3.2 and Section 3.3, the steps of the cooperative navigation method are as follows.
Step 1: Initialization
Initialization of the whole cooperative navigation system according to the constraints, such as motion characteristics and task requirements of the heterogeneous air-ground vehicle formations, including the establishment of the cost function, the initial model probability, and model transfer probability assignment in the IMM algorithm.
Step 2: Location estimation of all UGVs
At the current moment, using Steps 1 and 2 described in Section 3.1, the position predictions $\hat{x}_i(k|k-1),\ i \in G$ of all UGVs are obtained.
Step 3: Establishment of UAVs’ search space and selection of the minimum cost combination
According to the position prediction $\hat{x}_i(k|k-1)$ of each UGV and the current position $x_{\alpha_n}$ of each UAV, the area to be extended is constructed by combining the relevant constraints, and the areas to be extended of all UAVs are partitioned and combined into the search space. The cost value of every node combination in the search space is calculated according to Equation (17), and the combination with the lowest cost is selected as the target positions of the UAVs at the next moment. The UAVs use these target positions for path planning and flight control.
Step 4: Inertial/co-ranging fusion filter of UGV
The UAVs continuously broadcast ranging signals and position information during flight. After receiving the signals and decoding the distances, the UGVs fuse and filter them with their inertial navigation information according to Equations (21) and (22). It is worth noting that the filtering frequency need not be the same as the frequency of the UAV path planning; the path planning frequency can be reduced in order to lower the computational load.
Step 5: IMM sub-filter filtering and fusion
After the position of a ground vehicle has been corrected, the model parameters and model probabilities of the IMM algorithm are updated by using the corrected position and velocity as measurements and completing Steps 3 and 4 described in Section 3.1.
Step 6: Return to Step 2.

4. Simulation Results

4.1. Sensor Configuration and Simulation Scenario

In order to verify the effectiveness of the proposed method, heterogeneous air-ground vehicle formations are simulated for the scenario shown in Figure 1. The simulation environment contains five UAVs and three UGVs with a simulation time of 1200 s. The initial altitudes of the UAVs are distributed from 1000 to 1500 m, and the unmanned ground vehicles maneuver horizontally on the ground without large changes in altitude. The initial positions of the UAVs and UGVs are shown in Figure 6.
All UAVs are equipped with high-precision navigation equipment, such as RTK, INS, and ranging sensors, and communicate with the UGVs through wireless networks; the UGVs are equipped with INS and use wireless ranging information to assist their positioning. Wireless ranging operates in time division multiple access (TDMA) mode with time synchronization. In addition, signal arrival time measurement is an important means of ranging between UAVs and UGVs, and its main error source is equipment delay. The simulation parameters of the sensors carried by the UAVs and UGVs are shown in Table 1.

4.2. Results and Analysis of the Simulation

Based on the above simulation conditions, the localization accuracy of the UGVs is simulated and analyzed. In order to verify the localization performance of the proposed algorithm under the highly dynamic motion of multiple vehicles, three motion trajectories are designed for the UGVs. UGVs No. 1 and No. 3 are simulated in complex environments, such as urban alleys, with high-speed motion and sharp turns, while UGV No. 2 moves in a low-speed mode. The trajectories of the UGVs are shown in Figure 7. To verify the effectiveness of switching between the different motion models in the IMM algorithm, the UGVs exhibit different motion modes over the whole run, such as acceleration, uniform speed, and stationary periods of different durations. These trajectories are combined with the optimal configuration solution and real-time dynamic position adjustment of the UAVs to complete the cooperative navigation and positioning of the UGVs.
For a traditional space-based regional cooperative navigation system, a nonlinear optimization algorithm based on the idea of satellite pseudorange single-point positioning is often used to solve for the target positioning information. In order to verify the representativeness of the cooperative navigation algorithm based on motion estimation and the regional real-time path planning strategy proposed in this paper (DOC-CN), and also to verify the effect of UAV configuration changes on the overall navigation and positioning performance of the formation, the proposed algorithm is compared with the direct localization method based on a two-step least squares algorithm (TSLS) [50] and the fixed-configuration cooperative navigation method (FC-CN) [51]. The GNSS, inertial sensor, and ranging sensor parameters are kept the same in the three algorithms. The positioning simulation results of the UGVs are shown in Figure 8 and Figure 9.
As can be seen from Figure 8 and Figure 9, the cooperative navigation algorithm based on motion estimation and the regional real-time path planning strategy proposed in this paper achieves significantly better positioning accuracy than the other two navigation and positioning algorithms. The nonlinear filtering algorithm does not require high accuracy of the initial value and converges quickly; however, it only solves for the optimal solution from the spatial measurements and ignores the temporal state correlation, so the localization error fluctuates more when cooperative navigation relies only on the range values. The fixed-configuration cooperative navigation algorithm reduces the influence of target maneuvers on convergence, but its overall positioning accuracy is still lower than that of the cooperative navigation algorithm proposed in this paper.
The flight trajectories of the UAVs optimized by the DOC-CN algorithm are shown in Figure 10, and the GDOP value is kept at a minimum during the flight. To quantitatively analyze the positioning errors of the three UGVs under the different methods, the root mean square error (RMSE) is used, and the results are shown in Table 2. The position estimation error in Table 2 is computed according to Equation (26).
$$E = \sqrt{Err_E^2 + Err_N^2 + Err_U^2} \tag{26}$$
where $Err_E$, $Err_N$, and $Err_U$ are the estimation errors in the east, north, and up (ENU) directions, respectively.
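As a quick check, the 3-D values in Table 2 follow from the per-axis values through Equation (26); the short Python snippet below reproduces the UGV1 entry under DOC-CN (the per-axis numbers are taken from Table 2).

```python
import numpy as np

# Per-axis RMSE of UGV1 under DOC-CN (Table 2), combined according to Equation (26)
err_e, err_n, err_u = 3.32, 2.48, 1.52
print(np.sqrt(err_e**2 + err_n**2 + err_u**2))   # ~4.41 m, matching the 3-D entry
```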
From Table 2, it can be seen that when the UGVs use only the TSLS direct localization algorithm, the 3-D localization errors are 23.19 m~25.15 m. The interacting multiple model can improve the positioning performance of the UGVs in highly dynamic motion modes: with the FC-CN localization algorithm the 3-D localization errors are 8.99 m~9.13 m, and the positioning accuracy is nearly 1.78 times better than that of the traditional method. On this basis, the cooperative navigation algorithm based on motion estimation and the regional real-time path planning strategy proposed in this paper maintains a positioning accuracy of 3.30 m~3.56 m, and the overall positioning accuracy is 7.3 times better than that of the original algorithm.
To demonstrate the overall improvement in the positioning performance of all UGVs in the formation, the cumulative distribution function is used to describe the probability distribution of the magnitude of all UGV positioning errors, and 50 Monte Carlo experiments are conducted to reflect the stability of the proposed algorithm. Figure 11 compares the cumulative distributions of the UAV and UGV positioning estimation errors. Using the conventional method, only 9.8% of the positioning errors are less than 5 m, whereas with the DOC-CN algorithm the percentage of localization errors below 5 m reaches 91.2%.
In summary, it can be demonstrated that the DOC-CN algorithm can significantly improve the positioning accuracy of UGVs in complex environments such as urban alleyways. This allows UGVs to obtain similar positioning performance as UAVs, thus improving the overall positioning performance of heterogeneous formations.

5. Conclusions

In this paper, for GNSS-challenged situations in cities, hills, and valleys, we use multi-dimensional sensing of navigation information, wireless ranging information interaction, and cooperative interoperability within heterogeneous air-ground vehicle formations to complete real-time navigation and positioning of the UGVs. In this method, the DOC-CN algorithm is divided into three steps. First, the location of each UGV is predicted by the IMM algorithm. Then, the aerial benchmarks are established by calculating the cost function and applying the path planning algorithm. Finally, the SINS solution platform is constructed to obtain continuous position information of the UGVs.
The simulation shows that the DOC-CN algorithm proposed in this paper is significantly superior to traditional cooperative positioning methods such as TSLS and FC-CN. It can meet the navigation and positioning requirements of UGVs in a given area under a GNSS-challenged environment and improve the overall positioning accuracy of the formation. The next step is to embed the flight control program and navigation algorithm into a hardware platform and complete practical validation.

Author Contributions

Conceptualization, C.S. and Z.X.; methodology, C.S.; software, C.S. and M.C.; validation, C.S.; formal analysis, C.S.; investigation, C.S.; resources, Z.X.; data curation, C.S. and M.C.; writing—original draft preparation, C.S.; writing—review and editing, C.S., Z.X., R.W. and J.X.; visualization, R.W. and J.X.; supervision, Z.X.; project administration, Z.X.; funding acquisition, Z.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 62073163, 62103285, 62203228), National Defense Basic Research Program (JCKY2020605C009), the Aeronautic Science Foundation of China (Grant No. ASFC-2020Z071052001, Grant No. 202055052003), and the Natural Science Research Start-up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications (Grant No. NY221137).

Data Availability Statement

Not applicable.

Acknowledgments

In the process of writing this paper, I received a lot of support and encouragement from many people. First of all, I would like to thank Mingxing Chen, who offered many inspiring methods in our communications and discussions. I would also like to thank Zhi Xiong, Rong Wang and Jun Xiong for their advice on the writing of the paper. Finally, I would like to thank my parents for their silent support in my life.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Beyer, F.; Jurasinski, G.; Couwenberg, J.; Grenzdörffer, G. Multisensor data to derive peatland vegetation communities using a fixed-wing unmanned aerial vehicle. Int. J. Remote Sens. 2019, 40, 9103–9125. [Google Scholar] [CrossRef]
  2. Xiong, F.; Li, A.; Wang, H.; Tang, L. An SDN-MQTT based communication system for battlefield UAV swarms. IEEE Commun. Mag. 2019, 57, 41–47. [Google Scholar] [CrossRef]
  3. Tokekar, P.; Hook, J.V.; Mulla, D.; Isler, V. Sensor planning for a symbiotic UAV and UGV system for precision agriculture. IEEE Trans. Robot. 2016, 32, 1498–1511. [Google Scholar] [CrossRef]
  4. Lungu, M. Auto-landing of fixed wing unmanned aerial vehicles using the backstepping control. ISA Trans. 2019, 95, 194–210. [Google Scholar] [CrossRef] [PubMed]
  5. Caraballo, L.E.; Díaz-Báñez, J.M.; Fabila-Monroy, R.; Hidalgo-Toscano, C. Patrolling a terrain with cooperrative UAVs using random walks. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems, Atlanta, GA, USA, 11–14 June 2019; pp. 828–837. [Google Scholar]
  6. Alcaina, J.; Cuenca, Á.; Salt, J.; Zheng, M.; Tomizuka, M. Energy-efficient control for an unmanned ground vehicle in a wireless sensor network. J. Sens. 2019, 2019, 7085915. [Google Scholar] [CrossRef]
  7. Marconi, L.; Melchiorri, C.; Beetz, M.; Pangercic, D.; Siegwart, R.; Leutenegger, S.; Carloni, R.; Stramigioli, S.; Bruyninckx, H.; Doherty, P.; et al. The SHERPA project: Smart collaboration between humans and ground-aerial robots for improving rescuing activities in alpine environments. In Proceedings of the 2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), College Station, TX, USA, 5–8 November 2012; pp. 1–4. [Google Scholar]
  8. Deusdado, P.; Pinto, E.; Guedes, M.; Marques, F.; Rodrigues, P.; Lourenço, A.; Mendonça, R.; Silva, A.; Santana, P.; Corisco, J.; et al. An aerial-ground robotic team for systematic soil and biota sampling in estuarine mudflats. In Robot 2015: Second Iberian Robotics Conference: Advances in Robotics, Volume 2; Springer International Publishing: Cham, Switzerland, 2015; pp. 15–26. [Google Scholar]
  9. Chung, T.H. Offensive Swarm-Enabled Tactics (Offset); DARPA: Arlington County, VA, USA, 2021. [Google Scholar]
  10. Sivaneri, V.O.; Gross, J.N. Flight-testing of a cooperative UGV-to-UAV strategy for improved positioning in challenging GNSS environments. Aerosp. Sci. Technol. 2018, 82, 575–582. [Google Scholar] [CrossRef]
  11. Jung, S.; Ariyur, K.B. Compensating UAV GPS data accuracy through use of relative positioning and GPS data of UGV. J. Mech. Sci. Technol. 2017, 31, 4471–4480. [Google Scholar] [CrossRef]
  12. Himmat, A.S.; Zhahir, A.; Ali, S.A.M.; Ahmad, M.T. Unmanned Aerial Vehicle Assisted Localization using Multi-Sensor Fusion and Ground Vehicle Approach. J. Aeronaut. Astronaut. Aviat. 2022, 54, 251–260. [Google Scholar]
  13. Nieuwenhuisen, M.; Droeschel, D.; Beul, M.; Behnke, S. Autonomous navigation for micro aerial vehicles in complex GNSS-denied environments. J. Intell. Robot. Syst. 2015, 84, 199–216. [Google Scholar] [CrossRef]
  14. Oguz-Ekim, P.; Ali, K.; Madadi, Z.; Quitin, F.; Tay, W.P. Proof of concept study using DSRC, IMU and map fusion for vehicle localization in GNSS-denied environments. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems, Rio de Janeiro, Brazil, 1–4 November 2016; pp. 841–846. [Google Scholar]
  15. Ko, K.S. A basic study on the jamming mechanisms and characteristics against gps/gnss based on navigation warfare. J. Navig. Port Res. 2010, 34, 97–103. [Google Scholar] [CrossRef] [Green Version]
  16. Jin, Z.; Wang, X.; Moran, B.; Pan, Q.; Zhao, C. Multi-region scene matching based localisation for autonomous vision navigation of UAVs. J. Navig. 2016, 69, 1215–1233. [Google Scholar] [CrossRef]
  17. Li, Y.; Ma, T.; Chen, P.; Jiang, Y.; Wang, R.; Zhang, Q. Autonomous underwater vehicle optimal path planning method for seabed terrain matching navigation. Ocean. Eng. 2017, 133, 107–115. [Google Scholar] [CrossRef]
  18. Xu, F.; Fang, J. Velocity and position error compensation using strapdown inertial navigation system/celestial navigation system integration based on ensemble neural network. Aerosp. Sci. Technol. 2008, 12, 302–307. [Google Scholar] [CrossRef]
  19. Wu, J.; Xiong, J.; Guo, H. Improving robustness of line features for VIO in dynamic scene. Meas. Sci. Technol. 2022, 33, 065204. [Google Scholar] [CrossRef]
  20. Wang, S.; Zhan, X.; Zhai, Y.; Shen, J.; Wang, H. Performance estimation for Kalman filter based multi-agent cooperative navigation by employing graph theory. Aerosp. Sci. Technol. 2021, 112, 106628. [Google Scholar] [CrossRef]
  21. Hoang, G.M.; Denis, B.; Härri, J.; Slock, D. Bayesian fusion of GNSS, ITS-G5 and IR–UWB data for robust cooperative vehicular localization. Comptes Rendus Phys. 2019, 20, 218–227. [Google Scholar] [CrossRef]
  22. Xiong, J.; Xiong, Z.; Cheong, J.W.; Xu, J.; Yu, Y.; Dempster, A.G. Cooperative positioning for low-cost close formation flight based on relative estimation and belief propagation. Aerosp. Sci. Technol. 2020, 106, 106068. [Google Scholar] [CrossRef]
  23. Vetrella, A.R.; Fasano, G.; Accardo, D. Cooperative navigation in GPS-challenging environments exploiting position broadcast and vision-based tracking. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems, Arlington, VA, USA, 7–10 June 2016; pp. 447–456. [Google Scholar]
  24. Indelman, V.; Gurfil, P.; Rivlin, E.; Rotstein, H. Distributed vision-aided cooperative localization and navigation based on three-view geometry. Robot. Auton. Syst. 2012, 60, 822–840. [Google Scholar] [CrossRef]
  25. Gao, Y.; Meng, X.; Hancock, C.M.; Stephenson, S.; Zhang, Q. UWB/GNSS-based cooperative positioning method for V2X applications. In Proceedings of the 27th International Technical Meeting of the Satellite Division of the Institute of Navigation, Tampa, FL, USA, 8–12 September 2014; pp. 3212–3221. [Google Scholar]
  26. Xiong, J.; Cheong, J.W.; Xiong, Z.; Dempster, A.G.; Tian, S.; Wang, R. Hybrid cooperative positioning for vehicular networks. IEEE Trans. Veh. Technol. 2019, 69, 714–727. [Google Scholar] [CrossRef]
  27. Chen, M.; Xiong, Z.; Liu, J.; Wang, R.; Xiong, J. Cooperative navigation of unmanned aerial vehicle swarm based on cooperative dilution of precision. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420932717. [Google Scholar] [CrossRef]
  28. Heng, L.; Gao, G.X. Accuracy of range-based cooperative positioning: A lower bound analysis. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2304–2316. [Google Scholar] [CrossRef]
  29. Huang, B.; Yao, Z.; Cui, X.; Lu, M. Dilution of precision analysis for GNSS collaborative positioning. IEEE Trans. Veh. Technol. 2015, 65, 3401–3415. [Google Scholar] [CrossRef]
  30. Causa, F.; Vetrella, A.R.; Fasano, G.; Accardo, D. Multi-UAV formation geometries for cooperative navigation in GNSS-challenging environments. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 23–26 April 2018; pp. 775–785. [Google Scholar]
  31. Sivaneri, V.O.; Gross, J.N. UGV-to-UAV cooperative ranging for robust navigation in GNSS-challenged environments. Aerosp. Sci. Technol. 2017, 71, 245–255. [Google Scholar] [CrossRef]
  32. Maddumabandara, A.; Leung, H.; Liu, M. Experimental evaluation of indoor localization using wireless sensor networks. IEEE Sens. J. 2015, 15, 5228–5237. [Google Scholar] [CrossRef]
  33. Wu, S.; Zhang, S.; Huang, D. A TOA-based localization algorithm with simultaneous NLOS mitigation and synchronization error elimination. IEEE Sens. Lett. 2019, 3, 1–4. [Google Scholar] [CrossRef]
  34. Kang, Y.; Wang, Q.; Wang, J.; Chen, R. A high-accuracy TOA-based localization method without time synchronization in a three-dimensional space. IEEE Trans. Ind. Inform. 2019, 15, 173–182. [Google Scholar] [CrossRef]
  35. Yang, M.; Jackson, D.R.; Chen, J.; Xiong, Z.; Williams, J.T. A TDOA localization method for nonline-of-sight scenarios. IEEE Trans. Antennas Propag. 2019, 67, 2666–2676. [Google Scholar] [CrossRef]
  36. Su, Z.; Shao, G.; Liu, H. Semidefinite programming for NLOS error mitigation in TDOA localization. IEEE Commun. Lett. 2017, 22, 1430–1433. [Google Scholar] [CrossRef]
  37. Blom, H.A.P.; Bar-Shalom, Y. The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans. Autom. Control 1988, 33, 780–783. [Google Scholar] [CrossRef]
  38. Seah, C.E.; Hwang, I. Algorithm for performance analysis of the IMM algorithm. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1114–1124. [Google Scholar] [CrossRef]
  39. Li, X.R.; Bar-Shalom, Y. Design of an interacting multiple model algorithm for air traffic control tracking. IEEE Trans. Control Syst. Technol. 1993, 1, 186–194. [Google Scholar] [CrossRef]
  40. Cai, L.; Jia, J.P. Wheeled Robot Path Tracking Study Based on IMM Algorithm. In Advanced Materials Research; Trans Tech Publications Ltd.: Stafa-Zurich, Switzerland, 2014; Volume 1037, pp. 228–231. [Google Scholar]
  41. Liu, X.; Liu, X.; Zhang, W.; Yang, Y. Interacting multiple model UAV navigation algorithm based on a robust cubature Kalman filter. IEEE Access 2020, 8, 81034–81044. [Google Scholar] [CrossRef]
  42. Duchoň, F.; Babinec, A.; Kajan, M.; Beňo, P.; Florek, M.; Fico, T.; Jurišica, L. Path planning with modified a star algorithm for a mobile robot. Procedia Eng. 2014, 96, 59–69. [Google Scholar] [CrossRef] [Green Version]
  43. Gasparetto, A.; Boscariol, P.; Lanzutti, A.; Vidoni, R. Path planning and trajectory planning algorithms: A general overview. In Motion and Operation Planning of Robotic Systems; Springer: Cham, Switzerland, 2015; pp. 3–27. [Google Scholar]
  44. Qu, Y.-H.; Pan, Q.; Yan, J.-G. Flight path planning of UAV based on heuristically search and genetic algorithms. In Proceedings of the 31st Annual Conference of IEEE Industrial Electronics Society, 2005—IECON 2005, Raleigh, NC, USA, 6–10 November 2005. [Google Scholar]
  45. Zhang, Z.; Wu, J.; Dai, J.; He, C. A novel real-time penetration path planning algorithm for stealth UAV in 3D complex dynamic environment. IEEE Access 2020, 8, 122757–122771. [Google Scholar] [CrossRef]
  46. Dobrokhodov, V. Cooperative path planning of unmanned aerial vehicles. J. Guid. Control Dyn. 2011, 34, 1601–1602. [Google Scholar] [CrossRef]
  47. Li, W.; Jia, Y.; Du, J. Distributed Kalman filter for cooperative localization with integrated measurements. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 3302–3310. [Google Scholar] [CrossRef]
  48. Wen, W.; Pfeifer, T.; Bai, X.; Hsu, L.T. Comparison of extended Kalman filter and factor graph optimization for GNSS/INS integrated navigation system. arXiv 2020, arXiv:2004.10572. [Google Scholar]
  49. Xiong, Z.; Chen, J.H.; Wang, R.; Liu, J.Y. A new dynamic vector formed information sharing algorithm in federated filter. Aerosp. Sci. Technol. 2013, 29, 37–46. [Google Scholar] [CrossRef]
  50. Xu, J.X.; Xiong, Z.; Chen, M.X.; Liu, J.Y. Regional navigation algorithm assited by locations of multi UAVs. Acta Aeronaut. Astronaut. Sin. 2018, 39, 322129. [Google Scholar]
  51. Tian, Y.; Yan, Y.P.; Zhong, Y.Q.; Li, J.X.; Meng, Z. Data fusion method based on IMM-Kalman for an integrated navigation system. J. Harbin Eng. Univ. 2022, 43, 973–978. [Google Scholar]
Figure 1. Schematic of cooperative navigation.
Figure 2. Diagram of the UAV's area to be expanded.
Figure 3. Horizontal section of the area to be expanded.
Figure 4. Vertical section of the annular sector.
Figure 5. Cooperative navigation method structure.
Figure 6. Initial location of air-ground vehicle formations.
Figure 7. Trajectories of UGVs.
Figure 8. Position error of UGV1.
Figure 9. Position error of UGV2.
Figure 10. Optimized trajectories of UAVs.
Figure 11. Positioning error CDF comparisons.
Table 1. Sensor configuration and simulation parameters.

| Sensor | Parameter | Value |
|---|---|---|
| GNSS | Position noise standard dev. | 0.1 m |
| | Initial clock difference | 40 ns |
| | Clock drift | 50 ns/s |
| | Update frequency | 1 Hz |
| Gyroscope | Random constant drift | 0.1 °/h |
| | White noise | 0.1 °/h |
| | First-order Markov drift | 0.1 °/h |
| | First-order Markov correlation time | 3600 s |
| | Update frequency | 50 Hz |
| Accelerometer | First-order Markov drift | 10⁻⁴ g |
| | First-order Markov correlation time | 1800 s |
| | Update frequency | 50 Hz |
| Wireless ranging | Ranging noise | 10 m |
| | Update frequency | 1 Hz |
Table 2. Statistics of position error.

| Number | Position Error | RMSE/m (TSLS) | RMSE/m (FC-CN) | RMSE/m (DOC-CN) |
|---|---|---|---|---|
| UGV1 | Longitude | 13.04 | 4.78 | 3.32 |
| | Latitude | 18.61 | 5.74 | 2.48 |
| | Altitude | 10.78 | 5.04 | 1.52 |
| | 3-D | 25.15 | 9.00 | 4.41 |
| UGV2 | Longitude | 13.39 | 4.93 | 1.98 |
| | Latitude | 17.06 | 5.58 | 2.69 |
| | Altitude | 8.21 | 5.04 | 1.22 |
| | 3-D | 23.19 | 8.99 | 3.56 |
| UGV3 | Longitude | 13.38 | 5.08 | 1.87 |
| | Latitude | 17.72 | 5.87 | 2.53 |
| | Altitude | 8.10 | 4.80 | 1.01 |
| | 3-D | 23.64 | 9.13 | 3.30 |