Article

A Cooperative Target Localization Method Based on UAV Aerial Images

1 School of Astronautics, Northwestern Polytechnical University, Xi’an 710072, China
2 Xi’an Institute of Modern Control Technology, Xi’an 710065, China
3 Unmanned System Research Institute, Northwestern Polytechnical University, Xi’an 710072, China
4 Shaanxi Key Laboratory of Aerospace Vehicle Design, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Aerospace 2023, 10(11), 943; https://doi.org/10.3390/aerospace10110943
Submission received: 15 August 2023 / Revised: 25 October 2023 / Accepted: 26 October 2023 / Published: 6 November 2023
(This article belongs to the Special Issue Global Navigation Satellite System for Unmanned Aerial Vehicle)

Abstract

A passive localization algorithm based on UAV aerial images and Angle of Arrival (AOA) measurements is proposed to solve the target passive localization problem. In this paper, the images are captured with a fixed-focus camera. A target localization factor is defined to eliminate the effect of focal length and simplify the calculation. To synchronize the positions of multiple UAVs, a dynamic navigation coordinate system is defined with the leader at its center. The target localization factor is calculated from the image information and the interior orientation elements of the UAV's photoelectric reconnaissance device. The collinearity equation is used to derive the AOA, which is then combined with the shared position information of the UAV swarm to solve for the target coordinates. The accuracy of the positioning algorithm is verified with actual aerial images. On this basis, an error model is established, a calculation method for the co-localization PDOP is given, and the correctness of the error model is verified by Monte Carlo simulation. Finally, a Cubature Kalman Filter is designed to improve positioning accuracy, and simulations are performed for both stationary and moving targets. The experimental results show that the algorithm significantly improves target positioning accuracy and ensures stable tracking of the target.

1. Introduction

Reconnaissance-type UAVs are equipped with key features that enable them to locate targets quickly and accurately, as well as predict their behavior with precision. As technology has advanced, UAVs have become capable of multi-machine collaborative operations, thanks to bionic clustering and communication networking technologies. Optoelectronic information technology has also undergone significant development, resulting in the integration, miniaturization, and cost-effectiveness of airborne optoelectronic detection devices [1,2]. To further improve target localization accuracy, UAVs now employ a clustered approach to execute target localization and situational awareness duties [3].
In general, there are two types of UAV target localization techniques: active localization and passive localization. Active localization locates a target by actively emitting signals from a radio instrument, such as an onboard radar. Because the UAV must actively range the target while positioning itself, this approach degrades the UAV's concealment and survivability [4,5]. Passive localization instead collects target information without emitting electromagnetic waves, lasers, etc., to obtain ranging information, which to some extent ensures the safety of the UAV itself. According to the type of observation, passive localization techniques fall into several categories: the collinearity equation, image matching, binocular vision 3D localization, Doppler Rate of Change (DRC), Phase Difference Rate of Change (PDRC), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), Angle of Arrival (AOA), and other techniques [6,7]. The UAVs considered in this paper perform clustered localization tasks and are distinguished by being small, light, and low-power, with good anti-jamming capability and stealth. To suit such a platform and its usage needs, the localization technique must be improved.
Among the passive methods above, the collinearity equation, image matching, and binocular vision 3D localization are image-based methods that can localize a target from a single image, but with significant localization error. The collinearity equation approach rests on a flat-terrain assumption that does not always hold in real-world applications. Although feature-based image matching is more efficient and gray-scale correlation-based matching is more widely used, image-matching algorithms are harder to apply, take longer to run, and demand more computing resources, so they cannot be employed where real-time performance is crucial. Binocular or multi-camera vision 3D localization relies on shooting the target from various angles to acquire local feature points of the object, and the limited baseline distance means it cannot meet the measurement accuracy required for more distant targets. Although direction-finding cross-localization improves the prediction of target maneuvering, it has a significant weakness in multi-target localization and falls short of the general requirements for UAVs. Methods such as DRC, PDRC, TDOA, FDOA, and AOA build on rapidly improving localization algorithms from wireless sensor networks, where target information is typically gathered by sensors mounted at several observation points [8].
To sum up, this work proposes an enhanced passive localization technique based on the image data obtained from aerial photography, with the following key contributions:
1. The solution does not require focal length or elevation information as input;
2. Simultaneous localization of multiple targets is possible;
3. The target localization error can be estimated from the error component of each observation;
4. The proposed Cubature Kalman Filtering approach significantly increases target localization and tracking accuracy while maintaining good robustness.
The article is organized as follows. Section 2 describes the multi-UAV cooperative target localization scenario, including the algorithm assumptions and a schematic of the computational flow. Section 3 introduces the multi-UAV cooperative target localization algorithm. Section 4 examines the localization error and constructs a cooperative localization error model based on Section 3. Section 5 describes the principles and computational steps of the Cubature Kalman Filter. Section 6 verifies the algorithm by simulation and highlights its benefits and drawbacks. Section 7 presents the conclusions.

2. Scenario Problem Description

2.1. Cooperative Target Localization Process for Multiple UAVs

The scenario involves multiple UAVs performing real-time reconnaissance and localization missions. Each UAV is equipped with an electro-optical load that provides wide-field-of-view, high-resolution infrared and visible image information, allowing both target localization and target-assisted localization to be performed.
Through mission assembly, multiple UAVs in the scenario area coordinate to pinpoint the objective. Within the action range of the electro-optical load, each UAV extracts the target pixel coordinates from its image data, synchronizes the image data with the corresponding navigation data, determines the target's relative position using the relevant interior orientation elements, and then converts it into the target's absolute position. The specific process is shown in Figure 1. The positioning process of a single UAV requires preloaded elevation or range information. We therefore design a collaborative target localization approach that works without elevation information, exploiting the benefits of passive positioning and the accuracy of relative height measurement. Continuous tracking and staring at the target is impossible in a complex combat environment, but to exploit the UAV's wide field of view for efficient reconnaissance within a limited time window, multiple target localization solutions must be completed from multiple images. Furthermore, because absolute target position information is required, the UAVs should establish their spatial relative relationships through position sharing before collaborative target localization, and the time uniformity problem is solved by synchronizing and fusing the information from the individual UAVs.

2.2. Model Assumptions

The following assumptions are made in the above scenario problem:
(1)
Because the UAV’s camera center corresponds with the origin of the navigation coordinate system, any position mistake between them is ignored.
(2)
The UAV’s own location information is updated without delay;
(3)
The data link has no latency, a big bandwidth, and anti-interference properties to ensure that information is properly transferred.
(4)
The image’s optical distortion is ignored.
The input parameters for cooperative target localization by multiple UAVs fall mainly into the following categories: UAV flight status parameters, navigation data, and image data. The output is the position information of the target, and the specific calculation process is shown in Figure 2.

3. Multi-UAV Target Co-Location Modeling

The set of UAVs involved in cooperative positioning is denoted by $S_n = \{S_n^j \mid j = 1, 2, \ldots, N\}$, where $N$ is the total number of UAVs involved in localization; the set of targets that may be scouted and located is denoted by $U_T = \{U_T^i \mid i = 1, 2, \ldots, K\}$, where $K$ denotes the total number of such targets.

3.1. WGS-84 Earth Ellipsoid Model

The Earth ellipsoid is a mathematically defined Earth surface that approximates the geodetic level and serves as the reference framework for geodesy and global positioning techniques [9]. Reference [9] also lists the major parameters of the WGS-84 Earth ellipsoid model.

3.2. Synchronization and Updating of Observational Position

The image-based cooperative positioning of the target is carried out in each UAV's own coordinate system, but because the positions of the UAVs change continually, the positions of the multiple UAVs must be synchronized and updated.
It is assumed that each UAV can obtain its own geodetic coordinates and share its position with the others. The UAVs involved in localization are classified into two types: leaders and followers. The mission planning method ensures that there is always exactly one leader participating in positioning while the remaining UAVs are followers. The method described in [10] is used in this paper to select the leader aircraft.
To ease the calculation, this work develops a dynamic navigation coordinate system $O_n X_n Y_n Z_n$, defined as follows: its origin is rigidly attached to the camera center of the lead aircraft, the $X_n$ axis points north, the $Y_n$ axis lies in the plumb plane and points up, and the $Z_n$ axis completes the right-hand rule. The positions of the other UAVs in the dynamic navigation coordinate system are updated dynamically as the leader's position changes. Figure 3 depicts the position of each UAV in the dynamic navigation coordinate system at a given time.
The longitude, latitude, and altitude of a UAV in the WGS-84 geodetic coordinate system are $\tau_{c(j)}^g = (\lambda_j, \varphi_j, h_j)$, $(j = 1, 2, \ldots, N)$, and its position in the dynamic navigation coordinate system is $S_n^j$, $(j = 1, 2, \ldots, N)$. The lead aircraft's coordinates are denoted $\tau_{c(1)}^g = (\lambda_0^{(1)}, \varphi_0^{(1)}, h_0^{(1)})$ and $S_n^1$, respectively, and the positions of the other UAVs in the dynamic navigation coordinate system are given by Equation (1):
$$S_n^j = C_e^n C_g^e \left(\tau_{c(j)}^g - \tau_{c(1)}^g\right), \quad j = 2, \ldots, N \qquad (1)$$

$$C_g^e = \begin{bmatrix} -\sin\varphi_0^{(1)}\cos\lambda_0^{(1)} & -\sin\varphi_0^{(1)}\sin\lambda_0^{(1)} & \cos\varphi_0^{(1)} \\ \cos\varphi_0^{(1)}\cos\lambda_0^{(1)} & \cos\varphi_0^{(1)}\sin\lambda_0^{(1)} & \sin\varphi_0^{(1)} \\ -\sin\lambda_0^{(1)} & \cos\lambda_0^{(1)} & 0 \end{bmatrix} \qquad (2)$$
where $C_e^n$ denotes the transformation matrix from the Earth-centered Cartesian coordinate system to the navigation coordinate system.
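The position synchronization of Equation (1) can be sketched numerically as follows. This is a minimal Python illustration, assuming the standard WGS-84 geodetic-to-ECEF conversion and the North-Up-East navigation frame defined above; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

# WGS-84 ellipsoid constants (Section 3.1)
WGS84_A = 6378137.0                    # semi-major axis [m]
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, h):
    """Convert geodetic (lon, lat, height) to Earth-centered Cartesian coordinates [m]."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    n = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def follower_in_leader_frame(leader_llh, follower_llh):
    """Position of a follower UAV in the dynamic navigation frame
    (origin at the leader camera center, X north, Y up, Z east)."""
    lon0, lat0 = np.radians(leader_llh[0]), np.radians(leader_llh[1])
    # Rotation from ECEF to the local North-Up-East frame at the leader
    c_e_n = np.array([
        [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],  # North
        [ np.cos(lat0) * np.cos(lon0),  np.cos(lat0) * np.sin(lon0), np.sin(lat0)],  # Up
        [-np.sin(lon0),                 np.cos(lon0),                0.0         ],  # East
    ])
    d_ecef = geodetic_to_ecef(*follower_llh) - geodetic_to_ecef(*leader_llh)
    return c_e_n @ d_ecef

# Example: a follower roughly 100 m north-east of the leader, same altitude
print(follower_in_leader_frame((113.2840, 34.8405, 3000.0),
                               (113.2848, 34.8412, 3000.0)))
```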

3.3. Image Based Localization Factor Solution Method

For a single image, the target positioning solution requires specifying the coordinate systems, angles, and coordinate transformation matrices. In addition to the coordinate systems defined in Section 3.1 and Section 3.2, the solution involves the carrier coordinate system, the servo stabilization coordinate system, the electro-optical load coordinate system, and the pixel coordinate system (six coordinate systems in all), as well as parameters such as the aircraft attitude angles, the electro-optical load installation angles, the servo frame angles, and the look-down angle [11].
By identifying the targets on the coaxial image plane, the electro-optical load can determine the pixel coordinates of each target. From the image detection data and the characteristics of the optoelectronic device, the target information in the image can be determined. This information is denoted by $\Theta_T^{ij}$ and is defined as the target positioning factor.
First, build the camera coordinate system as depicted in Figure 4, with the UAV's position $S_n^j$ as the coordinate origin. In the navigation coordinate system, the coordinates of $S_n^j$ are $(X_{ns}^{(j)}, Y_{ns}^{(j)}, Z_{ns}^{(j)})$ and those of the target point $A^{(i)}$, which corresponds to the image point $a^{(i)}$, are $(X_{nA}^{(i)}, Y_{nA}^{(i)}, Z_{nA}^{(i)})$. The inverse form of the collinearity equation is:

$$\begin{cases}
X_{nA}^{(i)} - X_{ns}^{(j)} = \left(Y_{nA}^{(i)} - Y_{ns}^{(j)}\right)\dfrac{c_{11}^{(j)} X_{sa}^{(i)} + c_{12}^{(j)} Y_{sa}^{(i)} + c_{13}^{(j)} Z_{sa}^{(i)}}{c_{21}^{(j)} X_{sa}^{(i)} + c_{22}^{(j)} Y_{sa}^{(i)} + c_{23}^{(j)} Z_{sa}^{(i)}} \\[2ex]
Z_{nA}^{(i)} - Z_{ns}^{(j)} = \left(Y_{nA}^{(i)} - Y_{ns}^{(j)}\right)\dfrac{c_{31}^{(j)} X_{sa}^{(i)} + c_{32}^{(j)} Y_{sa}^{(i)} + c_{33}^{(j)} Z_{sa}^{(i)}}{c_{21}^{(j)} X_{sa}^{(i)} + c_{22}^{(j)} Y_{sa}^{(i)} + c_{23}^{(j)} Z_{sa}^{(i)}}
\end{cases} \qquad (3)$$
where $c_{mn}^{(j)}$ $(m, n \in [1, 3])$ are the entries of $C_{c(j)}^{n}$, the transformation matrix from the UAV camera coordinate system to the navigation coordinate system.
Let the longitudinal and lateral resolution of the UAV's electro-optical load be $P_{xV\max}^{(j)} \times P_{xU\max}^{(j)}$, the half field-of-view angles in the longitudinal and lateral directions be $\alpha_{1/2}^{(j)} \times \beta_{1/2}^{(j)}$, the physical pixel dimensions in the longitudinal and lateral directions be $d_v^{(j)} \times d_u^{(j)}$, the focal length be $f^{(j)}$, and the principal point of the image in the pixel coordinate system be $(u_0^{(j)}, v_0^{(j)})$. With the definitions in Figure 4, $f^{(j)} = X_{sa}^{(i)}$. The target's pixel coordinates are $(u_i, v_i)$, $(i = 1, 2, \ldots, K)$, the camera coordinates of the image point $a^{(i)}$ are $(X_{sa}^{(i)}, Y_{sa}^{(i)}, Z_{sa}^{(i)})$, and the transformation between the two-dimensional image coordinate system and the camera coordinate system can be expressed as follows:
$$\frac{1}{X_{sa}^{(i)}}\begin{pmatrix} X_{sa}^{(i)} \\ Y_{sa}^{(i)} \\ Z_{sa}^{(i)} \end{pmatrix} = \begin{pmatrix} 1 \\ \dfrac{(v_i - v_0)\, d_v^{(j)}}{f^{(j)}} \\ \dfrac{(u_i - u_0)\, d_u^{(j)}}{f^{(j)}} \end{pmatrix} \qquad (4)$$
The following formula can be derived in accordance with Figure 4 and its definition.
$$\frac{d_v^{(j)}}{f^{(j)}} = \frac{2\tan\left(\alpha_{1/2}^{(j)}\right)}{P_{xV\max}^{(j)}} \qquad (5)$$

$$\frac{d_u^{(j)}}{f^{(j)}} = \frac{2\tan\left(\beta_{1/2}^{(j)}\right)}{P_{xU\max}^{(j)}} \qquad (6)$$
The right-hand side of Equation (4) is the desired positioning factor; substituting Equations (5) and (6), it can be written as:
$$\Theta_T^{ij} = \begin{bmatrix} 1 \\[0.5ex] \dfrac{(v_i - v_0^{(j)})\, 2\tan\left(\alpha_{1/2}^{(j)}\right)}{P_{xV\max}^{(j)}} \\[1.5ex] \dfrac{(u_i - u_0^{(j)})\, 2\tan\left(\beta_{1/2}^{(j)}\right)}{P_{xU\max}^{(j)}} \end{bmatrix} \qquad (7)$$
Define $C_{c(j)}^{n} = \begin{bmatrix} \omega_n^{jT} & \kappa_n^{jT} & \rho_n^{jT} \end{bmatrix}^T$, where $\omega_n^j = (c_{11}^{(j)}\ c_{12}^{(j)}\ c_{13}^{(j)})$, $\kappa_n^j = (c_{21}^{(j)}\ c_{22}^{(j)}\ c_{23}^{(j)})$, and $\rho_n^j = (c_{31}^{(j)}\ c_{32}^{(j)}\ c_{33}^{(j)})$. Converting Equation (3), we obtain:
$$\begin{cases}
\dfrac{X_{nA}^{(i)} - X_{ns}^{(j)}}{Y_{nA}^{(i)} - Y_{ns}^{(j)}} = \dfrac{\omega_n^j\, \Theta_T^{ij}}{\kappa_n^j\, \Theta_T^{ij}} \\[2ex]
\dfrac{Z_{nA}^{(i)} - Z_{ns}^{(j)}}{Y_{nA}^{(i)} - Y_{ns}^{(j)}} = \dfrac{\rho_n^j\, \Theta_T^{ij}}{\kappa_n^j\, \Theta_T^{ij}}
\end{cases} \qquad (8)$$
In Equation (7), $\Theta_T^{ij}$ encapsulates the interior orientation elements and the target pixel coordinates used by UAV $S_n^j$ in its reconnaissance of target $U_T^i$. Clearly, for a given type of electro-optical load, the interior orientation elements $\alpha_{1/2}^{(j)}$, $\beta_{1/2}^{(j)}$, $P_{xV\max}^{(j)}$, $P_{xU\max}^{(j)}$, $u_0^{(j)}$, and $v_0^{(j)}$ are constants, so the localization factor varies only with the target pixel coordinates and is unaffected by the physical pixel size and the focal length.
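A minimal Python sketch of Equation (7), using the electro-optical load parameters of Table 2 as defaults; the sign conventions for the pixel axes are an assumption here, since they depend on how the pixel coordinate system is defined.

```python
import numpy as np

def localization_factor(u, v, u0=2047.0, v0=1535.0,
                        half_fov_v_deg=10.5, half_fov_u_deg=14.0,
                        px_v_max=3072, px_u_max=4096):
    """Target localization factor of Equation (7).

    (u, v)        : target pixel coordinates
    (u0, v0)      : principal point position (Table 2)
    half_fov_*    : vertical / horizontal half fields of view (Table 2)
    px_v/u_max    : sensor resolution (Table 2)

    The factor depends only on the pixel coordinates and the interior
    orientation elements, not on the focal length. Pixel-axis sign
    conventions are assumed and may need flipping for a real sensor.
    """
    return np.array([
        1.0,
        (v - v0) * 2.0 * np.tan(np.radians(half_fov_v_deg)) / px_v_max,
        (u - u0) * 2.0 * np.tan(np.radians(half_fov_u_deg)) / px_u_max,
    ])

# Target near the image centre -> factor close to [1, 0, 0]^T (cf. Section 4.3)
print(localization_factor(2047, 1535))
# Target at image corner A (0, 0) of Figure 5
print(localization_factor(0, 0))
```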

3.4. Image-Based AOA Vector Solution Process

The AOA can be solved if the current position and attitude of the UAV and the target localization factor are known.
Let $V_i^j = [\theta_i^j(k), \psi_i^j(k)]^T$ denote the AOA vector. The following equations are obtained from Equation (8):
$$\tan V_i^j = \begin{bmatrix} \sigma_{target}^{(i)}\, \dfrac{1}{\left\| \zeta_i^j \right\|_2} \\[1.5ex] \varepsilon_i^j \end{bmatrix} \qquad (9)$$

$$\zeta_i^j = \begin{bmatrix} \dfrac{\omega_n^j\, \Theta_T^{ij}}{\kappa_n^j\, \Theta_T^{ij}} & \dfrac{\rho_n^j\, \Theta_T^{ij}}{\kappa_n^j\, \Theta_T^{ij}} \end{bmatrix}^T \qquad (10)$$

$$\varepsilon_i^j = \dfrac{\omega_n^j\, \Theta_T^{ij}}{\rho_n^j\, \Theta_T^{ij}} \qquad (11)$$
From Equation (9) we obtain:
$$V_i^j = \begin{bmatrix} \sigma_{target}^{(i)} \tan^{-1}\left(\dfrac{1}{\left\|\zeta_i^j\right\|_2}\right) \\[1.5ex] \tan^{-1}\left(\varepsilon_i^j\right) \end{bmatrix} \qquad (12)$$
Define $\sigma_{target}^{(i)}$ as follows:

$$\sigma_{target}^{(i)} = \begin{cases} 1, & \text{Aerial Target} \\ -1, & \text{Ground Target} \end{cases} \qquad (13)$$
As is evident from Equations (9)-(13), the target positioning factor $\Theta_T^{ij}$, the transformation matrix $C_{c(j)}^{n}$, and the target coefficient $\sigma_{target}^{(i)}$ have the largest effects on the AOA vector.
This paper addresses ground reconnaissance operations, so $\sigma_{target}^{(i)}$ is set to −1.
The AOA vectors for the $N$ UAVs are:

$$V_S^N = \begin{bmatrix} V_i^1, \ldots, V_i^j, \ldots, V_i^N \end{bmatrix}^T \qquad (14)$$
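The AOA computation of Equations (9)-(14) can be sketched as follows, using the equivalent arcsin/arctan form that appears later in Equation (25); the rotation matrix $C_{c(j)}^{n}$ is assumed to be available, and its construction from the attitude, installation, and frame angles is not reproduced here.

```python
import numpy as np

def aoa_from_factor(theta_factor, c_c2n):
    """AOA vector [theta, psi] = [elevation, azimuth] in the navigation frame.

    theta_factor : localization factor of Equation (7)
    c_c2n        : 3x3 rotation from the camera frame to the navigation frame

    Follows the arcsin/arctan form of Equation (25); the ground/aerial sign
    convention of Equation (13) is carried by the sign of the Y (up) component.
    """
    v = c_c2n @ (theta_factor / np.linalg.norm(theta_factor))  # unit line of sight
    theta = np.arcsin(v[1])        # elevation, negative for a ground target
    psi = np.arctan2(v[0], v[2])   # azimuth, arctan(X_north / Z_east) as in Eq. (25)
    return np.array([theta, psi])

# Camera boresight (the camera X axis) pointing straight down:
c_c2n = np.array([[0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(np.degrees(aoa_from_factor(np.array([1.0, 0.0, 0.0]), c_c2n)))  # about [-90, 0]
```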

3.5. Co-Location Solution Model

The location coordinates of the targets are $U_T^i = [x_{Ti}, y_{Ti}, z_{Ti}]^T$ $(i = 1, 2, \ldots, K)$. Figure 3 depicts the relationship between UAV $S_n^j$ and target $U_T^i$, with $R_i^j$ denoting their separation.
According to Figure 3 and Equations (8) and (12), the following equation can be obtained:
$$U_T^i - S_n^j = R_i^j \begin{bmatrix} \cos\theta_i^j \sin\psi_i^j \\ \sin\theta_i^j \\ \cos\theta_i^j \cos\psi_i^j \end{bmatrix} \qquad (15)$$
Define $\tau_{\theta_i^j, \psi_i^j} = [\tau_1^{\theta_i^j, \psi_i^j}\ \ \tau_2^{\theta_i^j}\ \ \tau_3^{\theta_i^j, \psi_i^j}]^T$, then:

$$\begin{bmatrix} \tau_1^{\theta_i^j, \psi_i^j} \\ \tau_2^{\theta_i^j} \\ \tau_3^{\theta_i^j, \psi_i^j} \end{bmatrix} = \begin{bmatrix} \cos\theta_i^j \sin\psi_i^j \\ \sin\theta_i^j \\ \cos\theta_i^j \cos\psi_i^j \end{bmatrix} \qquad (16)$$
The procedure suggested in [12] allows $R_i^j$ to be approximately eliminated:

$$\Phi_i^j\, U_T^i = \Phi_i^j\, S_n^j, \quad (i = 1, 2, \ldots, K,\ j = 1, 2, \ldots, N) \qquad (17)$$

$$\Phi_i^j = \begin{pmatrix}
\left(\tau_1^{\theta_i^j, \psi_i^j}\right)^2 - 1 & \tau_1^{\theta_i^j, \psi_i^j}\tau_2^{\theta_i^j} & \tau_1^{\theta_i^j, \psi_i^j}\tau_3^{\theta_i^j, \psi_i^j} \\
\tau_1^{\theta_i^j, \psi_i^j}\tau_2^{\theta_i^j} & \left(\tau_2^{\theta_i^j}\right)^2 - 1 & \tau_2^{\theta_i^j}\tau_3^{\theta_i^j, \psi_i^j} \\
\tau_1^{\theta_i^j, \psi_i^j}\tau_3^{\theta_i^j, \psi_i^j} & \tau_2^{\theta_i^j}\tau_3^{\theta_i^j, \psi_i^j} & \left(\tau_3^{\theta_i^j, \psi_i^j}\right)^2 - 1
\end{pmatrix} \qquad (18)$$
where $\Phi_i^j$ is a symmetric matrix with $\mathrm{rank}(\Phi_i^j) = 2$; thus Equation (17) for a single UAV is rank-deficient and cannot determine the target position by itself, and equations from additional UAVs are required.
For the set of UAVs $S_n$ involved in localization, we have:

$$\begin{bmatrix} \Phi_i^1 \\ \Phi_i^2 \\ \vdots \\ \Phi_i^N \end{bmatrix} U_T^i = \begin{bmatrix} \Phi_i^1 S_n^1 \\ \Phi_i^2 S_n^2 \\ \vdots \\ \Phi_i^N S_n^N \end{bmatrix}, \quad N \geq 2 \qquad (19)$$
The target location coordinates can be determined from Equations (1), (7), (12) and (19). The result $U_T^i$ is the position of each target in the navigation coordinate system, referenced to the UAV coordinates $S_n^j$ in that system. Converting from the Earth-centered Cartesian coordinate system to the geodetic coordinate system and iterating until the required accuracy is reached yields the final geodetic longitude, latitude, and altitude $(\lambda_{Ti}, \phi_{Ti}, h_{Ti})$; after converting to degrees, the targets' Earth coordinates are finally obtained.
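A minimal least-squares sketch of Equations (15)-(19); the UAV positions, AOA values, and synthetic geometry in the example are illustrative only.

```python
import numpy as np

def locate_target(uav_positions, aoa_list):
    """Least-squares target position from UAV positions and AOA vectors.

    uav_positions : (N, 3) UAV coordinates in the dynamic navigation frame
    aoa_list      : (N, 2) AOA vectors [theta, psi] (elevation, azimuth)

    For each UAV, Phi = tau tau^T - I with tau the unit line-of-sight vector
    of Equation (16); stacking Phi @ U_T = Phi @ S_n for N >= 2 UAVs gives an
    over-determined linear system (Equation (19)).
    """
    a_rows, b_rows = [], []
    for s, (theta, psi) in zip(np.asarray(uav_positions, float),
                               np.asarray(aoa_list, float)):
        tau = np.array([np.cos(theta) * np.sin(psi),
                        np.sin(theta),
                        np.cos(theta) * np.cos(psi)])
        phi = np.outer(tau, tau) - np.eye(3)   # rank-2 symmetric matrix, Eq. (18)
        a_rows.append(phi)
        b_rows.append(phi @ s)
    target, *_ = np.linalg.lstsq(np.vstack(a_rows), np.concatenate(b_rows), rcond=None)
    return target

# Two UAVs 3000 m above a target at the origin (cf. the PDOP study in Section 4.3)
uavs = [(100.0, 3000.0, 0.0), (-100.0, 3000.0, 0.0)]
aoas = []
for s in uavs:
    d = -np.asarray(s)                  # line of sight to the true target at the origin
    d /= np.linalg.norm(d)
    aoas.append((np.arcsin(d[1]), np.arctan2(d[0], d[2])))
print(locate_target(uavs, aoas))        # close to [0, 0, 0]
```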

4. Collaborative Positioning Error Model

4.1. AOA Error Model Based on Image

The look-down angle of the electro-optical load is $\theta_{xs}^j$, with error $\delta\theta_{xs}^j(k)$; the installation angles of the electro-optical load are $[\phi_p^j(k), \vartheta_p^j(k), \gamma_p^j(k)]$, with errors $[\delta\phi_p^j(k), \delta\vartheta_p^j(k), \delta\gamma_p^j(k)]$; the yaw, pitch, and roll angles at the moment of image capture are $[\phi_b^j(k), \vartheta_b^j(k), \gamma_b^j(k)]$, with measurement errors $[\delta\phi_b^j(k), \delta\vartheta_b^j(k), \delta\gamma_b^j(k)]$; the altitude and azimuth angles of the frame are $[\theta_c^j(k), \psi_c^j(k)]$, with errors $[\delta\theta_c^j(k), \delta\psi_c^j(k)]$; and the pixel coordinates of the target observed by UAV $S_n^j$ at time $k$ are taken at their true values. It should be noted that, since the navigation device and the electro-optical load are rigidly coupled to reduce error, the installation angles are taken as 0.
Then the observation vector is:
$$\hat{L}_k = \left[\hat{\theta}_b^j(k), \hat{\phi}_b^j(k), \hat{\gamma}_b^j(k), \hat{\theta}_c^j(k), \hat{\phi}_c^j(k), \hat{\theta}_{xs}^j(k)\right]^T \qquad (20)$$
The observation measurement error is:
$$\delta L_k = \left[\delta\theta_b^j(k), \delta\phi_b^j(k), \delta\gamma_b^j(k), \delta\theta_c^j(k), \delta\phi_c^j(k), \delta\theta_{xs}^j(k)\right]^T \qquad (21)$$
At time $k$, let the measured value of the AOA vector $V_S^N$ be $\hat{V}_k$ and its true value be $V_k$; then:
$$\hat{V}_k = \left[\hat{V}_i^1(k), \ldots, \hat{V}_i^j(k), \ldots, \hat{V}_i^N(k)\right]^T \qquad (22)$$

$$V_k = \hat{V}_k + J_k\, \delta L_k + \sigma_k^J \qquad (23)$$
where $J_k$ is the Jacobian matrix and $\sigma_k^J$ is the residual vector; each element of $\sigma_k^J$ is the residual of a single observation relative to its standardized value.
The calculation of the matrix $J_k$ is performed below.
To facilitate the calculation, define the auxiliary variable $M_{aux} = [\bar{X}_j, \bar{Y}_j, \bar{Z}_j]^T$; normalize $\Theta_T^{ij}$ to obtain the unit vector $\tilde{\tau}_c^{(i)}$, which is then transformed by $C_{c(j)}^{n}$:

$$M_{aux} = \begin{bmatrix} \bar{X}_j \\ \bar{Y}_j \\ \bar{Z}_j \end{bmatrix} = C_{c(j)}^{n}\, \tilde{\tau}_c^{(i)}, \quad (i = 1, 2, \ldots, M),\ (j = 1, 2, \ldots, N) \qquad (24)$$
Equation (24) is transformed to obtain a new expression for the AOA Vector.
$$V_k = \begin{bmatrix} \cdots & \arcsin \bar{Y}_j & \arctan\dfrac{\bar{X}_j}{\bar{Z}_j} & \cdots & \arcsin \bar{Y}_N & \arctan\dfrac{\bar{X}_N}{\bar{Z}_N} \end{bmatrix}^T \qquad (25)$$
A linearized transformation of Equation (23) yields $J_k$:

$$J_k = \begin{bmatrix}
f_1(\theta_b^1) & f_1(\phi_b^1) & f_1(\gamma_b^1) & f_1(\theta_c^1) & f_1(\phi_c^1) & f_1(\theta_{xs}^1) \\
g_1(\theta_b^1) & g_1(\phi_b^1) & g_1(\gamma_b^1) & g_1(\theta_c^1) & g_1(\phi_c^1) & g_1(\theta_{xs}^1) \\
\vdots & & & & & \vdots \\
f_j(\theta_b^j) & f_j(\phi_b^j) & f_j(\gamma_b^j) & f_j(\theta_c^j) & f_j(\phi_c^j) & f_j(\theta_{xs}^j) \\
g_j(\theta_b^j) & g_j(\phi_b^j) & g_j(\gamma_b^j) & g_j(\theta_c^j) & g_j(\phi_c^j) & g_j(\theta_{xs}^j)
\end{bmatrix} \qquad (26)$$

where
$$f_j(\upsilon_j) = K_j \frac{\partial \bar{Y}_j}{\partial \upsilon_j}, \qquad g_j(\upsilon_j) = W_j\left(\frac{\partial \bar{X}_j}{\partial \upsilon_j}\bar{Z}_j - \frac{\partial \bar{Z}_j}{\partial \upsilon_j}\bar{X}_j\right), \qquad \upsilon_j = \theta_b^j, \phi_b^j, \gamma_b^j, \theta_p^j, \phi_p^j, \gamma_p^j, \theta_c^j, \phi_c^j, \theta_{xs}^j$$

with $K_j = \dfrac{1}{\sqrt{1 - \bar{Y}_j^2}}$ and $W_j = \dfrac{1}{\bar{X}_j^2 + \bar{Z}_j^2}$.
$$\frac{\partial M_{aux}}{\partial \upsilon_j} = \frac{\partial C_{c(j)}^{n}}{\partial \upsilon_j}\, \tilde{\tau}_c^{(i)}, \qquad \upsilon_j = \theta_b^j, \phi_b^j, \gamma_b^j, \theta_p^j, \phi_p^j, \gamma_p^j, \theta_c^j, \phi_c^j, \theta_{xs}^j \qquad (27)$$
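The partial derivatives in Equation (26) can be cross-checked numerically. The sketch below approximates $J_k$ by finite differences for any function that maps the observation vector of Equation (20) to an AOA vector; the toy model at the end is purely illustrative.

```python
import numpy as np

def numerical_jacobian(aoa_func, obs, eps=1e-6):
    """Finite-difference approximation of the Jacobian J_k of Equation (23).

    aoa_func : maps the observation vector L_k (attitude, frame and look-down
               angles, Equation (20)) to the AOA vector [theta, psi]
    obs      : nominal observation vector (radians)

    A numerical alternative to the analytic partials of Equation (26),
    useful for checking a hand-derived J_k.
    """
    obs = np.asarray(obs, dtype=float)
    f0 = np.asarray(aoa_func(obs))
    jac = np.zeros((f0.size, obs.size))
    for i in range(obs.size):
        perturbed = obs.copy()
        perturbed[i] += eps
        jac[:, i] = (np.asarray(aoa_func(perturbed)) - f0) / eps
    return jac

# Toy model: an AOA that depends only on the pitch, frame-altitude and look-down angles
toy = lambda L: np.array([L[0] + L[5], 0.1 * L[3]])
print(numerical_jacobian(toy, np.zeros(6)))
```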

4.2. Collaborative Positioning Error Model Based on PDOP

Position Dilution of Precision (PDOP) is a measure of positioning accuracy; in a satellite positioning system, the PDOP value reflects how well the satellites are geometrically distributed [13]. During the cooperative positioning of several UAVs, the DOP value at each measurement point in the reconnaissance region is likewise affected by geometry, with smaller DOP values generally resulting in higher positioning accuracy [14]. To analyze the error range of cooperative positioning and to suggest optimization ideas, this study applies PDOP simulation and analysis to the cooperative positioning of multiple UAVs. The derivation is given below.
Assuming that the true target position is $[x_{Ti}, y_{Ti}, z_{Ti}]^T$ and the solved target position is $[\hat{x}_{Ti}, \hat{y}_{Ti}, \hat{z}_{Ti}]^T$, $(i = 2, 3, \ldots, M)$, we obtain:

$$\delta X_k = \begin{bmatrix} x_{Ti} \\ y_{Ti} \\ z_{Ti} \end{bmatrix} - \begin{bmatrix} \hat{x}_{Ti} \\ \hat{y}_{Ti} \\ \hat{z}_{Ti} \end{bmatrix} \qquad (28)$$
$$V_k = \hat{V}_k + H_k\, \delta X_k + \sigma_k^H \qquad (29)$$
H k is the Jacobian matrix, and σ k H is a vector of residuals between the observed values and the standard values. Each value in this vector represents the residual of a single observation compared to its standard value.
According to Equation (29), $H_k$ is obtained as:
$$H_k = \begin{bmatrix}
-\dfrac{(x_{Ti}-x_n^1)(y_{Ti}-y_n^1)}{L_1 r_1^2} & \dfrac{L_1}{r_1^2} & -\dfrac{(z_{Ti}-z_n^1)(y_{Ti}-y_n^1)}{L_1 r_1^2} \\[1.5ex]
\dfrac{z_{Ti}-z_n^1}{L_1^2} & 0 & -\dfrac{x_{Ti}-x_n^1}{L_1^2}\,\mathrm{sign}(z_{Ti}-z_n^1) \\[1.5ex]
\vdots & \vdots & \vdots \\[1ex]
-\dfrac{(x_{Ti}-x_n^j)(y_{Ti}-y_n^j)}{L_j r_j^2} & \dfrac{L_j}{r_j^2} & -\dfrac{(z_{Ti}-z_n^j)(y_{Ti}-y_n^j)}{L_j r_j^2} \\[1.5ex]
\dfrac{z_{Ti}-z_n^j}{L_j^2} & 0 & -\dfrac{x_{Ti}-x_n^j}{L_j^2}\,\mathrm{sign}(z_{Ti}-z_n^j)
\end{bmatrix} \qquad (30)$$
The covariance matrix of the error $\delta X_k$ is:

$$G_{\delta X} = E\left[\delta X_k \delta X_k^T\right] = B_{\delta X}\left\{E\left[\delta V_k \delta V_k^T\right]\right\} B_{\delta X}^T \qquad (31)$$

$$B_{\delta X} = \left(H_k^T H_k\right)^{-1} H_k^T \qquad (32)$$

$$\mathrm{PDOP} = \sqrt{\mathrm{trace}\left(G_{\delta X}\right)} \qquad (33)$$
The PDOP solution procedure for cooperative localization of multiple UAV targets is provided by Equations (30)–(33).
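A compact sketch of the PDOP computation of Equations (31)-(33); the Jacobian in the example is a random placeholder, whereas for contour maps such as Figures 10-12 it would be evaluated analytically (Equation (30)) at each grid point.

```python
import numpy as np

def pdop(h_k, angle_cov):
    """PDOP of Equations (31)-(33) for one candidate target position.

    h_k       : (2N, 3) Jacobian of the stacked AOA observations with respect
                to the target position (Equation (30))
    angle_cov : (2N, 2N) covariance of the AOA measurement errors
    """
    b = np.linalg.inv(h_k.T @ h_k) @ h_k.T   # Equation (32)
    g = b @ angle_cov @ b.T                  # Equation (31)
    return np.sqrt(np.trace(g))              # Equation (33)

# Example with the AOA error levels of Section 4.3 (0.81 deg altitude, 2.73 deg azimuth)
sigma = np.radians([0.81, 2.73, 0.81, 2.73])
h = np.random.default_rng(0).normal(size=(4, 3)) * 1e-3   # placeholder Jacobian, illustration only
print(pdop(h, np.diag(sigma ** 2)))
```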

4.3. Error Analysis

The error analysis has two parts. One concerns the exterior orientation elements, i.e., the UAV's flight state at the moment the image is captured, as listed in Table 1. The other concerns the interior orientation elements, represented by the target pixel coordinates at the moment of capture; Table 2 lists these parameters, and Table 3 gives the distribution of the measurement errors of the exterior orientation elements.
The simulation is run under the conditions listed in Table 1 to verify that the error model is accurate; the additional parameters are given in Table 2 and Table 3.
Assuming the target is close to the center of the image, the localization factor from Equation (7) is approximately $\Theta_T^{ij} \approx [1\ 0\ 0]^T$. According to the circumstances outlined in [11], the error when the target is at the edge of the image is greater than when it is at the center, so the four points A–D in Figure 5 were chosen for Monte Carlo simulation. The look-down angle was set to −90°, and the relative flight altitude was 3000 m.
Figure 6, Figure 7, Figure 8 and Figure 9 display the distribution of AOA errors for the target sites when the parameter values in Table 1 are used. Table 4 displays the computation results and observational parameters.
According to the experiments, under ideal flight conditions the standard deviations of the target's image-based AOA azimuth and altitude angles do not exceed 2.73° and 0.81°, respectively.
Based on the PDOP calculation method proposed in Section 4.1 and Section 4.2, the multi-aircraft cooperative localization error is analyzed. First, two UAVs are set up for target localization; their position coordinates are listed in Table 5, the position error is 0 m, and the flight altitude is 3000 m. From the parameters in Table 2, the ground area covered by a captured image at a relative altitude of 3000 m is about 1400 m × 1200 m. Within this area, the PDOP value at any position can be calculated and plotted as a contour map, as shown in Figure 10.
The results of cooperative localization by two UAVs show that when the distance between the UAVs is 200 m and 100 m, the minimum PDOP value is 50 m and 100 m, respectively; that is, the smaller the spacing, the larger the localization error. In addition, target localization accuracy is also related to the distribution of the UAVs; therefore, four UAVs are set up for target localization and their errors are analyzed. Assuming that the four UAVs fly at a relative height of 3000 m, their position coordinates in the 2D plane are listed in Table 6. The calculation results are shown in Figure 11 and Figure 12.
Figure 11 and Figure 12 compare the PDOP values for cooperative target positioning by four UAVs in different formation modes with Monte Carlo simulation results. The errors of the two are roughly equal, with a minimum positioning error of about 20 m.
In summary, a PDOP-based error computation model was constructed from the collaborative positioning model of multiple UAVs and validated by Monte Carlo simulation. The simulation results show that the baseline has a significant impact on the collaborative positioning error of two UAVs, with a minimum positioning error of roughly 50 m. Under the same conditions, the collaborative positioning error of four UAVs is less affected by the formation mode, with a minimum positioning error of roughly 20 m. Filtering methods can further improve the positioning accuracy.

5. Target Localization Method Based on Cubature Kalman Filter

Nonlinear filtering is required because the observation functions of both the time difference and the measured AOA observations are nonlinear [15,16]. Because of its linearization step, the Extended Kalman Filter (EKF) achieves good filtering performance only when the linearization error of the system's state and observation equations is small [17,18]. The more recently developed particle filter (PF) is a good algorithm for nonlinear estimation problems [19,20]. The improvements in localization error achieved by the EKF and PF algorithms were therefore compared.
Arasaratnam and Haykin proposed the Cubature Kalman Filter (CKF) algorithm [21] to solve the integration problem of nonlinear functions in filtering algorithms. Similar to the Unscented Kalman Filter (UKF), it first computes a set of sampling points (called cubature points), propagates them one step ahead through the state equation, and then corrects the predicted state through the measurement update and the Kalman gain calculation. In comparison with the UKF, the CKF obtains its points from the spherical-radial cubature criterion without linearizing the state equation, transfers the points directly through the nonlinear state equation, and guarantees that the weights are always positive, which improves the algorithm's robustness and accuracy [22,23,24].
According to the third-order spherical-radial criterion, the number of cubature points for an $n$-dimensional state vector is $m = 2n$, and the set of cubature points is:

$$\xi_j = \sqrt{\frac{m}{2}}\,[1]_j, \quad j = 1, 2, \ldots, 2n \qquad (34)$$
where $[1]_j$ denotes the $j$th cubature point, i.e., the $j$th column of $[1]$, and $[1]$ can be expressed as:

$$[1] = \left[\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix} \cdots \begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}\ \begin{pmatrix}-1\\0\\\vdots\\0\end{pmatrix} \cdots \begin{pmatrix}0\\0\\\vdots\\-1\end{pmatrix}\right] \qquad (35)$$
The weights of the cubature points are all equal:

$$\omega_j = \frac{1}{m} \qquad (36)$$
For the following target state equation and the measurement equation:
$$X_k = f(X_{k-1}, u_{k-1}) + w_{k-1}, \qquad Z_k = h(X_k, u_k) + v_k \qquad (37)$$
The Cubature Kalman Filtering algorithm and the specific process are given below:
Step 1: Calculate the cubature points.

$$\begin{cases} P_{k-1,k-1} = S_{k-1} S_{k-1}^T \\ \tilde{\chi}_{k-1}^j = \hat{X}_{k-1} + S_{k-1}\xi_j \end{cases} \qquad (38)$$
where $S_{k-1}$ is obtained from a Cholesky decomposition of $P_{k-1,k-1}$.
Step 2: One-step prediction of the cubature points.

$$\tilde{\chi}_{k,k-1}^j = f\left(\tilde{\chi}_{k-1}^j\right) \qquad (39)$$
Step 3: Compute one-step prediction and covariance matrix of state quantities.
$$\begin{cases} \hat{X}_{k,k-1} = \dfrac{1}{m}\displaystyle\sum_{j=1}^{m} \tilde{\chi}_{k,k-1}^j \\[1.5ex] P_{k,k-1} = \dfrac{1}{m}\displaystyle\sum_{j=1}^{m} \tilde{\chi}_{k,k-1}^j \left(\tilde{\chi}_{k,k-1}^j\right)^T - \hat{X}_{k,k-1}\hat{X}_{k,k-1}^T + Q_{k-1} \end{cases} \qquad (40)$$
Step 4: Calculate new cubature points from the one-step prediction.

$$\begin{cases} P_{k,k-1} = S_{k,k-1} S_{k,k-1}^T \\ \tilde{\chi}_{k,k-1}^j = \hat{X}_{k,k-1} + S_{k,k-1}\xi_j \end{cases} \qquad (41)$$
Step 5: Observation prediction for the new cubature points.

$$Z_{k,k-1}^j = h\left(\tilde{\chi}_{k,k-1}^j\right) \qquad (42)$$
Step 6: Calculate the predicted measurement mean and the covariances from the observation predictions of the cubature points.

$$\hat{Z}_{k,k-1} = \frac{1}{m}\sum_{j=1}^{m} Z_{k,k-1}^j \qquad (43)$$

$$P_{xz} = \frac{1}{m}\sum_{j=1}^{m} \tilde{\chi}_{k,k-1}^j \left(Z_{k,k-1}^j\right)^T - \hat{X}_{k,k-1}\hat{Z}_{k,k-1}^T \qquad (44)$$

$$P_{zz} = \frac{1}{m}\sum_{j=1}^{m} Z_{k,k-1}^j \left(Z_{k,k-1}^j\right)^T - \hat{Z}_{k,k-1}\hat{Z}_{k,k-1}^T + R_k \qquad (45)$$
Step 7: Calculating Kalman gain.
$$K_k = P_{xz} P_{zz}^{-1} \qquad (46)$$
Step 8: Calculate system state update and covariance update.
$$\begin{cases} \hat{X}_k = \hat{X}_{k,k-1} + K_k\left(Z_k - \hat{Z}_{k,k-1}\right) \\ P_k = P_{k,k-1} - K_k P_{zz} K_k^T \end{cases} \qquad (47)$$
The flow of the Cubature Kalman Filtering algorithm is shown in Figure 13.
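For reference, a self-contained Python sketch of one CKF cycle following Steps 1-8; the toy constant-velocity example at the end is illustrative only and unrelated to the flight data used later.

```python
import numpy as np

def ckf_step(x, p, z, f, h, q, r):
    """One cycle of the Cubature Kalman Filter (Steps 1-8, Equations (34)-(47)).

    x, p : prior state estimate and covariance
    z    : current measurement
    f, h : state-transition and measurement functions
    q, r : process and measurement noise covariances
    """
    n = x.size
    m = 2 * n
    xi = np.sqrt(m / 2.0) * np.hstack((np.eye(n), -np.eye(n)))   # cubature points, Eqs. (34)-(35)

    # Steps 1-2: draw cubature points and propagate them through the state equation
    s = np.linalg.cholesky(p)
    chi = x[:, None] + s @ xi
    chi_pred = np.column_stack([f(chi[:, j]) for j in range(m)])

    # Step 3: predicted state and covariance
    x_pred = chi_pred.mean(axis=1)
    p_pred = chi_pred @ chi_pred.T / m - np.outer(x_pred, x_pred) + q

    # Steps 4-6: redraw points, predict measurements, innovation covariances
    s_pred = np.linalg.cholesky(p_pred)
    chi2 = x_pred[:, None] + s_pred @ xi
    z_pts = np.column_stack([h(chi2[:, j]) for j in range(m)])
    z_pred = z_pts.mean(axis=1)
    p_xz = chi2 @ z_pts.T / m - np.outer(x_pred, z_pred)
    p_zz = z_pts @ z_pts.T / m - np.outer(z_pred, z_pred) + r

    # Steps 7-8: Kalman gain, state and covariance update
    k = p_xz @ np.linalg.inv(p_zz)
    x_new = x_pred + k @ (z - z_pred)
    p_new = p_pred - k @ p_zz @ k.T
    return x_new, p_new

# Toy run: nearly constant-velocity state, position-only measurement
dt = 0.5
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
h = lambda x: np.array([x[0]])
x, p = np.array([0.0, 1.0]), np.eye(2)
for step in range(5):
    z = np.array([1.2 * (step + 1) * dt]) + np.random.normal(0.0, 0.1, 1)
    x, p = ckf_step(x, p, z, f, h, np.eye(2) * 1e-3, np.eye(1) * 0.01)
print(x)
```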

6. Simulation and Analysis

6.1. Co-Localization Algorithm Verification

An external field test was performed to validate the correctness of the co-localization algorithm. For the same ground identifier, twelve UAV aerial photographs taken under varied working conditions were selected and paired into six groups of two images each; the target location was solved for each group, yielding six sets of target coordinate values. An example of one group of aerial images is shown in Figure 14.
Figure 15 shows the target localization results (the UAV states for collaborative target localization are listed in Table 7, and the calculated target positions for these states are given in Table 8). The blue solid dots in the figure represent the true position of the ground identifier (target), while the red solid dots represent the results of collaborative target localization by pairs of UAVs; there are six sets of data in total. It can be seen that the algorithm accurately calculates the position of the target in the two-dimensional plane.

6.2. Multi-UAV Co-Location and Tracking

This section presents simulations for fixed and moving targets, which were used to validate the effectiveness of the CKF. During target discovery and localization by the four UAV observation stations, each UAV moves along a specific trajectory and makes numerous observations of the target area. The first measurement, combined with the initial positions from which the UAVs begin positioning, provides the initial estimate of the target position and the corresponding covariance matrix of the zero-mean estimation error, which initialize the filtering algorithm; the filter then processes the subsequent observations to obtain a more accurate estimate of the target.
Figure 3 shows the NUE (North-Up-East) coordinate system $O_n X_n Y_n Z_n$, and Table 9 lists the initial state of the ground target, the number of UAVs, their initial states, and the measurement errors.
Ref. [25] provides the basic motion model of the target, and the efficiency of the Cubature Kalman Filter-based target localization approach is validated in this study using the target's constant-velocity linear motion and turning motion.
The discrete constant-velocity linear and turning motion models of the target are given by the following equations:

$$x_k^{cv} = \Phi_k^{cv} x_{k-1}^{cv} + W_k^{cv} \qquad (48)$$

$$x_k^{ct} = \Phi_k^{ct} x_{k-1}^{ct} + W_k^{ct} \qquad (49)$$

where $x_k^{cv} = [x_k\ \dot{x}_k]^T$ and $x_k^{ct} = [x_k\ \dot{x}_k\ y_k\ \dot{y}_k]^T$ are the state vectors of the target's constant-velocity and turning motion models, $\Phi_k^{cv}$ and $\Phi_k^{ct}$ the state transition matrices, and $W_k^{cv}$, $W_k^{ct}$ the system noise.
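A sketch of the two transition matrices, assuming the standard discrete constant-velocity and coordinated-turn forms; the exact matrices used in [25] may differ. The example propagates the moving-target initial state of Table 9 through one full 80 s turn.

```python
import numpy as np

def phi_cv(t):
    """Discrete constant-velocity transition matrix for the state [x, x_dot]."""
    return np.array([[1.0, t],
                     [0.0, 1.0]])

def phi_ct(t, omega):
    """Discrete constant-turn transition matrix for the state [x, x_dot, y, y_dot]
    with turn rate omega (standard coordinated-turn model)."""
    s, c = np.sin(omega * t), np.cos(omega * t)
    return np.array([[1.0, s / omega,         0.0, -(1.0 - c) / omega],
                     [0.0, c,                 0.0, -s],
                     [0.0, (1.0 - c) / omega, 1.0, s / omega],
                     [0.0, s,                 0.0, c]])

# One full turn in 80 s (Section 6.2), sampled every 0.5 s (Table 9)
omega = 2.0 * np.pi / 80.0
x = np.array([0.0, 11.7851, 0.0, 11.7851])    # moving-target initial state, Table 9
for _ in range(160):                           # 160 x 0.5 s = 80 s
    x = phi_ct(0.5, omega) @ x
print(x)   # back near the initial state after one full revolution
```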
To model the UAV position measurement error, a random normal error with zero mean and a standard deviation of 10 m is added to the UAV positions along their paths. Figure 16 depicts the error between the measured and true positions of the UAV.
Four UAVs approach the target from various directions. Figure 17 depicts the positioning result for a stationary target: the positioning deviation quickly converges from tens of meters to less than ten meters.
Assume that the target alternates between straight-line motion and turning motion, with one full turn taking 80 s. Four UAVs are programmed to proceed toward the target's initial position to carry out continuous localization and tracking of the target. Figure 18 depicts the motion trajectories of the target and the UAVs and the tracking state at a given time.
To account for the interference of random perturbations on positioning outcomes, 100 Monte Carlo simulations were run, and the RMSE (Root Mean Square Error) was employed as the accuracy judgment measure.
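As a small illustration of the evaluation metric, the following sketch computes the per-step position RMSE over a set of Monte Carlo runs; the synthetic data are illustrative only.

```python
import numpy as np

def position_rmse(estimates, truth):
    """Position RMSE over Monte Carlo runs at each time step.

    estimates : (runs, steps, 3) estimated target positions
    truth     : (steps, 3) true target positions
    """
    err = estimates - truth[None, :, :]
    return np.sqrt((err ** 2).sum(axis=2).mean(axis=0))

# 100 Monte Carlo runs (as in this section), here with synthetic 10 m errors
rng = np.random.default_rng(1)
truth = np.zeros((50, 3))
est = truth[None] + rng.normal(0.0, 10.0 / np.sqrt(3), size=(100, 50, 3))
print(position_rmse(est, truth)[:5])   # roughly 10 m at every step
```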
Figure 19 shows the measurement results for the position components when tracking a moving target; the algorithm proposed in this paper converges quickly and tracks the target stably. Figure 20 shows the position and velocity RMS errors when tracking a moving target: the tracking accuracy gradually improves over time, the position error converges to within 12 m, and the velocity error converges to within 0.5 m/s, which shows that the algorithm maintains high accuracy under external interference.

7. Conclusions

The target co-localization approach presented in this paper does not rely on elevation or ranging information. It can calculate the positions of many targets at once, considerably improving UAV detection capability. The approach is almost hardware-independent and is thus appropriate for low-cost small UAV cluster systems. Using the error model, the minimum target positioning error under typical flight conditions at a relative flight altitude of 3000 m is found to be 20 m. The Cubature Kalman Filter algorithm is used to simulate and verify both stationary and moving targets; the target localization accuracy is improved by about 40% compared with the unfiltered result, and the target can be continuously tracked with high accuracy in the presence of interference.

Author Contributions

Conceptualization, M.D.; methodology, M.D.; software, M.D., T.W. and K.Z.; validation, M.D., H.Z. and T.W.; formal analysis, M.D. and K.Z.; investigation, T.W.; resources, H.Z.; data curation, H.Z.; writing—original draft preparation, M.D. and H.Z.; writing—review and editing, M.D. and H.Z.; visualization, M.D. and K.Z.; supervision, M.D.; project administration, M.D.; funding acquisition, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, Minglei Du, upon reasonable request.

Conflicts of Interest

No potential conflict of interest was reported by the authors.

References

  1. Chen, W.-C.; Lin, C.-L.; Chen, Y.-Y.; Cheng, H.-H. Quadcopter Drone for Vision-Based Autonomous Target Following. Aerospace 2023, 10, 82. [Google Scholar] [CrossRef]
  2. Elmeseiry, N.; Alshaer, N.; Ismail, T. A Detailed Survey and Future Directions of Unmanned Aerial Vehicles (UAVs) with Potential Applications. Aerospace 2021, 8, 363. [Google Scholar] [CrossRef]
  3. Cai, Y.; Guo, H.; Zhou, K.; Xu, L. Unmanned Aerial Vehicle Cluster Operations under the Background of Intelligentization. In Proceedings of the 2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM), Manchester, UK, 23–25 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 525–529. [Google Scholar] [CrossRef]
  4. Chen, X.; Qin, K.; Luo, X.; Huo, H.; Gou, R.; Li, R.; Wang, J.; Chen, B. Distributed Motion Control of UAVs for Cooperative Target Location Under Compound Constraints. In Proceedings of the 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 17–19 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 576–580. [Google Scholar] [CrossRef]
  5. Kim, B.; Pak, J.; Ju, C.; Son, H.I. A Multi-Antenna-based Active Tracking System for Localization of Invasive Hornet Vespa velutina. In Proceedings of the 2022 22nd International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 27 November–1 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1693–1697. [Google Scholar] [CrossRef]
  6. Li, H.; Fan, X.; Shi, M. Research on the Cooperative Passive Location of Moving Targets Based on Improved Particle Swarm Optimization. Drones 2023, 7, 264. [Google Scholar] [CrossRef]
  7. Wang, S.; Li, Y.; Qi, G.; Sheng, A. Sensor Selection and Deployment for Range-Only Target Localization Using Optimal Sensor-Target Geometry. IEEE Sens. J. 2023, 23, 21757–21766. [Google Scholar] [CrossRef]
  8. Zhao, Y.; Hu, D.; Zhao, Y.; Liu, Z. Moving target localization for multistatic passive radar using delay, Doppler and Doppler rate measurements. J. Syst. Eng. Electron. 2020, 31, 939–949. [Google Scholar] [CrossRef]
  9. Sudano, J.J. An exact conversion from an Earth-centered coordinate system to latitude, longitude and altitude. In Proceedings of the IEEE 1997 National Aerospace and Electronics Conference, NAECON 1997, Dayton, OH, USA, 14–17 July 1997; IEEE: Piscataway, NJ, USA, 1997; Volume 2, pp. 646–650. [Google Scholar] [CrossRef]
  10. Wang, M.; Zhang, D.; Tang, S.; Xu, B.; Zhao, J. Online Task Planning Method for Drone Swarm based on Dynamic Coalition Strategy. Acta Armamentarii 2023, 44, 2207–2223. [Google Scholar]
  11. Du, M.; Li, S.; Zheng, K.; Li, H.; Che, X. Target Location Method of Small Unmanned Reconnaissance Platform Based on POS Data. In Proceedings of the 2021 International Conference on Autonomous Unmanned Systems, Changsha, China, 24–26 September 2021. [Google Scholar]
  12. Wu, L.; Wang, B.; Wei, J.; He, S.; Zhao, Y. Dual-aircraft passive localization model based on AOA and its solving method. Syst. Eng. Electron. Technol. 2020, 42, 978–986. [Google Scholar]
  13. Fan, B.; Li, G.; Li, P.; Yi, W.; Yang, Z. Research and Application of PDOP Model for Laser Interferometry Measurement of Three-Dimensional Point Coordinates. Surv. Mapp. Bull. 2015, 11, 28–31. [Google Scholar]
  14. Yang, K.; Huang, J. Positioning Accuracy Evaluation of Satellite Navigation Systems. Mar. Surv. Mapp. 2009, 29, 26–28. [Google Scholar]
  15. Qin, Y.; Zhang, H.; Wang, S. Kalman Filter and Combined Navigation Principles; Northwestern Polytechnical University Press: Xi’an, China, 2015. [Google Scholar]
  16. Neusypin, K.; Kupriyanov, A.; Maslennikov, A.; Selezneva, M. Investigation into the nonlinear Kalman filter to correct the INS/GNSS integrated navigation system. GPS Solut. 2023, 27, 91. [Google Scholar] [CrossRef]
  17. Gong, B.; Wang, S.; Hao, M.; Guan, X.; Li, S. Range-based collaborative relative navigation for multiple unmanned aerial vehicles using consensus extended Kalman filter. Aerosp. Sci. Technol. 2021, 112, 106647. [Google Scholar] [CrossRef]
  18. Easton, P.; Kalin, N.; Joshua, M. Invariant Extended Kalman Filtering for Underwater Navigation. IEEE Robot. Autom. Lett. 2021, 6, 5792–5799. [Google Scholar]
  19. Yue, J.; Wang, H.; Zhu, D.; Aleksandr, C. UAV formation cooperative navigation algorithm based on improved particle filtering. Chin. J. Aeronaut. 2023, 44, 251–262. [Google Scholar]
  20. Liu, Y.; He, Z.; Lu, Y.; Di, K.; Wen, D.; Zou, X. Autonomous navigation and localization in IMU/UWB group domain based on particle filtering. Transducer Microsyst. Technologies. 2022, 41, 47–50. [Google Scholar]
  21. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar]
  22. Luo, Q.; Shao, Y.; Li, J.; Yan, X.; Liu, C. A multi-AUV cooperative navigation method based on the augmented adaptive embedded cubature Kalman filter algorithm. Neural Comput. Appl. 2022, 34, 18975–18992. [Google Scholar] [CrossRef]
  23. Liu, W.; Shi, Y.; Hu, Y.; Hsieh, T.H.; Wang, S. An improved GNSS/INS navigation method based on cubature Kalman filter for occluded environment. Meas. Sci. Technol. 2023, 34, 035107. [Google Scholar] [CrossRef]
  24. Gao, B.; Hu, G.; Zhang, L.; Zhong, Y.; Zhi, X. Cubature Kalman filter with closed-loop covariance feedback control for integrated INS/GNSS navigation. Chin. J. Aeronaut. 2023, 36, 363–376. [Google Scholar] [CrossRef]
  25. Jin, G.; Tan, L. Targeting Technology for Unmanned Reconnaissance Aircraft Optronic Platforms; Xi’an University of Electronic Science and Technology Press: Xi’an, China, 2012. [Google Scholar]
Figure 1. Schematic diagram of the process of Multi-UAV performing target positioning tasks.
Figure 2. Flow chart of target co-location calculation.
Figure 3. Schematic diagram of position synchronization and update based on dynamic navigation coordinate system.
Figure 4. Schematic diagram of single-image based object detection.
Figure 5. Points selected for Monte Carlo simulation to obtain maximum error. A, B, C and D correspond to the maximum error positions and E corresponds to the minimum error position.
Figure 6. Distribution of AOA errors of A. (a) shows the distribution of altitude AOA error, with mean −72.747 and standard deviation 0.810; (b) shows the distribution of azimuth AOA error, with mean 143.377 and standard deviation 2.734.
Figure 7. Distribution of AOA errors of B. (a) shows the distribution of altitude AOA error, with mean −72.733 and standard deviation 0.803; (b) shows the distribution of azimuth AOA error, with mean 36.590 and standard deviation 2.701.
Figure 8. Distribution of AOA errors of C. (a) shows the distribution of altitude AOA error, with mean −72.702 and standard deviation 0.799; (b) shows the distribution of azimuth AOA error, with mean −36.655 and standard deviation 2.715.
Figure 9. Distribution of AOA errors of D. (a) shows the distribution of altitude AOA error, with mean −72.726 and standard deviation 0.808; (b) shows the distribution of azimuth AOA error, with mean −143.310 and standard deviation 2.727.
Figure 10. Two-UAV cooperative reconnaissance PDOP simulation (Baseline: 200 m and 100 m).
Figure 11. PDOP and Monte Carlo simulation position error contour map of square formation flying.
Figure 12. PDOP and Monte Carlo simulation position error contour map of diamond formation flying.
Figure 13. Flow of the Cubature Kalman Filtering algorithm.
Figure 14. The ground images captured by two drones at the same time, with the number “28” in the red box as the ground identifier, whose true coordinates are known.
Figure 15. The distribution of actual target position and localization result in a two-dimensional plane.
Figure 16. Diagrammatic representation of the noise interference on the drone’s position, superimposed on the motion trajectory, with a mean of 0 and a standard deviation of 10 m.
Figure 17. The position result for a stationary target.
Figure 18. Moving target localization and tracking trajectory status (2D).
Figure 19. Tracking status of moving target position component (2D).
Figure 20. RMSE of target position and velocity prediction.
Table 1. Observation parameters.

Parameter | Symbol | Value
Pitch/° | $\vartheta_b^j$ | 1.2
Yaw/° | $\phi_b^j$ | 0
Roll/° | $\gamma_b^j$ | −0.5
Installation pitch/° | $\vartheta_p^j$ | 0
Installation yaw/° | $\phi_p^j$ | 0
Installation roll/° | $\gamma_p^j$ | 0
Frame altitude angle/° | $\vartheta_c^j$ | −1.2
Frame azimuth angle/° | $\phi_c^j$ | 0.5
Look-down angle/° | $\vartheta_{xs}^j$ | −90
Table 2. Electro-optical reconnaissance equipment parameters.

Parameter | Symbol | Value
Field of view | $\beta^{(j)} \times \alpha^{(j)}$ | 28° × 21°
Resolution | $P_{xU\max}^{(j)} \times P_{xV\max}^{(j)}$ | 4096 × 3072
Horizontal half field of view | $\beta_{1/2}^{(j)}$ | 14°
Vertical half field of view | $\alpha_{1/2}^{(j)}$ | 10.5°
Principal point position | $(u_0^{(j)}, v_0^{(j)})$ | (2047, 1535)
Table 3. Observation parameters.

Parameter | Symbol | Error Range
Pitch error/° | $\delta\vartheta_b^j$ | N(0, 0.8)
Yaw error/° | $\delta\phi_b^j$ | N(0, 0.8)
Roll error/° | $\delta\gamma_b^j$ | N(0, 0.8)
Installation pitch error/° | $\delta\vartheta_p^j$ | N(0, 0.2)
Installation yaw error/° | $\delta\phi_p^j$ | N(0, 0.2)
Installation roll error/° | $\delta\gamma_p^j$ | N(0, 0.2)
Frame altitude angle error/° | $\delta\vartheta_c^j$ | N(0, 0.1)
Frame azimuth angle error/° | $\delta\phi_c^j$ | N(0, 0.1)
Table 4. AOA of measurements at each point under condition 1.

ID | Point | Pixel Coordinate | Distribution of AOA Altitude Angle/° (Monte Carlo) | Distribution of AOA Altitude Angle/° (PDOP) | Distribution of AOA Azimuth Angle/° (Monte Carlo) | Distribution of AOA Azimuth Angle/° (PDOP)
1 | A | (0, 0) | N(−72.747, 0.810) | N(−72.75, 0.806) | N(143.377, 2.734) | N(143.38, 2.717)
2 | B | (4096, 0) | N(−72.733, 0.803) | N(−72.74, 0.806) | N(36.590, 2.701) | N(36.59, 2.715)
3 | C | (4096, 3072) | N(−72.702, 0.799) | N(−72.73, 0.806) | N(−36.655, 2.715) | N(−36.63, 2.714)
4 | D | (0, 3072) | N(−72.726, 0.808) | N(−72.74, 0.806) | N(−143.310, 2.727) | N(−143.34, 2.716)
Table 5. Coordinate distribution of UAVs.

Point Coordinates | Symbol | Value 1/m | Value 2/m
UAV1 | (x1, y1, z1) | (50, 0, 0) | (25, 0, 0)
UAV2 | (x2, y2, z2) | (−50, 0, 0) | (−25, 0, 0)
Table 6. Coordinate distribution of UAVs.

Point Coordinates | Symbol | Square Formation Flying/m | Diamond Formation Flying/m
UAV1 | (x1, z1) | (100, 100) | (0, 100)
UAV2 | (x2, z2) | (100, −100) | (0, −100)
UAV3 | (x3, z3) | (−100, 100) | (200, 0)
UAV4 | (x4, z4) | (−100, −100) | (−200, 0)

The PDOP contour distributions are as follows.
Table 7. States of the 12 UAVs used for localization.

UAV ID | UAV Position (Lng., Lat., Alt.) | UAV Attitude (Pitch, Yaw, Roll) | Installation Angle (Pitch, Yaw, Roll) | Frame Angle (Alt., Azim.)
1 | 113.285672, 34.840742, 1988.7 | 3.25, 75.52, 1.32 | −90, 0, 0 | −3.27, −1.24
2 | 113.283782, 34.840863, 1987.8 | 3.41, 75.55, 2.98 | −90, 0, 0 | −3.52, −2.69
3 | 113.285258, 34.840542, 1986.6 | 2.75, 267.91, 0.89 | −90, 0, 0 | −2.76, −1.26
4 | 113.284669, 34.841472, 1980.9 | 3.36, 267.42, 1.62 | −90, 0, 0 | −3.37, −1.78
5 | 113.283662, 34.839757, 1978.0 | 2.49, 144.30, 0.40 | −90, 0, 0 | −2.54, −0.39
6 | 113.284839, 34.840663, 982.3 | 3.34, 144.85, −0.08 | −90, 0, 0 | −3.40, 0.10
7 | 113.283124, 34.840924, 982.2 | 3.75, 80.46, 3.67 | −90, 0, 0 | −3.79, −3.57
8 | 113.283091, 34.840818, 980.3 | 3.60, 79.11, 4.12 | −90, 0, 0 | −3.62, −4.13
9 | 113.285250, 34.840497, 980.1 | 6.15, 260.06, 3.47 | −90, 0, 0 | −6.23, −3.55
10 | 113.284723, 34.841950, 978.6 | 4.82, 261.98, −0.96 | −90, 0, 0 | −4.82, 0.81
11 | 113.284085, 34.840844, 976.0 | 2.79, 145.32, −1.25 | −90, 0, 0 | −2.79, 1.14
12 | 113.285672, 34.840742, 1988.7 | 2.81, 145.20, 2.55 | −90, 0, 0 | −2.87, −2.37
Table 8. Two UAVs collaborative target positioning results (first group ID: 1&5, second group ID: 2&6, third group ID: 3&4, fourth group ID: 7&9, fifth group ID: 8&11, sixth group ID: 10&12).

Actual Target Position (2D) (Lng., Lat.) | Localization Result (2D) (Lng., Lat.) | Localization Error/m
113.284076, 34.840799 | 113.284151, 34.840752 | 8.6
 | 113.284111, 34.840713 | 10.1
 | 113.284178, 34.840758 | 10.5
 | 113.284145, 34.840733 | 9.8
 | 113.284151, 34.840734 | 9.9
 | 113.284156, 34.840757 | 8.7
Table 9. Co-location simulation parameters.

Simulation Parameter | Range
Target initial state $x_T$ | Stationary: $[0, 0, 0, 0]^T$; Moving: $[0, 11.7851, 0, 11.7851]^T$
Number of UAVs | 4
Error | 0.81° (AOA altitude angle), 2.73° (AOA azimuth angle)
Noise | Gaussian white noise
Sampling step | 500 ms